Multi-Objective Archiving
Most multi-objective optimisation algorithms maintain an archive explicitly
or implicitly during their search. Such an archive can be used solely to store
high-quality solutions presented to the decision maker, but in many cases it may
participate in the search process (e.g., as the population in evolutionary
computation). Over the last two decades, archiving, the process of comparing
new solutions with previous ones and deciding how to update the
archive/population, has stood as an important issue in evolutionary
multi-objective optimisation (EMO). This is evidenced by constant efforts from
the community on developing various effective archiving methods, ranging from
conventional Pareto-based methods to more recent indicator-based and
decomposition-based ones. However, the focus of these efforts has been on
empirical performance comparison in terms of specific quality indicators; there
is a lack of systematic study of archiving methods from a general theoretical
perspective. In this paper, we attempt to conduct a systematic overview of
multi-objective archiving, in the hope of paving the way to understanding
archiving algorithms from a holistic perspective of theory and practice and,
more importantly, of providing guidance on how to design theoretically desirable
and practically useful archiving algorithms. In doing so, we also show that
archiving algorithms based on weakly Pareto compliant indicators (e.g., the
epsilon-indicator), as long as they are designed properly, can achieve the same
theoretical guarantees as archivers based on Pareto compliant indicators (e.g.,
the hypervolume indicator). Such guarantees include the limit-optimal property,
the limit form of the strongest optimality property that a bounded archiving
algorithm can have with respect to the most general form of superiority between
solution sets. Comment: 21 pages, 4 figures, journal
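As context for the Pareto-based archivers surveyed above, the basic archive-update rule (compare a new solution with archive members under Pareto dominance and discard dominated ones) can be sketched as follows. This is a minimal unbounded archiver for illustration only, not any specific method from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Archiving step: accept candidate only if no archived solution
    dominates (or duplicates) it, then drop solutions it dominates."""
    if any(dominates(s, candidate) or s == candidate for s in archive):
        return archive  # archive unchanged
    return [s for s in archive if not dominates(candidate, s)] + [candidate]

archive = []
for point in [(3, 4), (2, 5), (1, 6), (2, 2), (5, 1)]:
    archive = update_archive(archive, point)
print(archive)  # [(1, 6), (2, 2), (5, 1)]
```

Bounded archivers, the paper's main subject, must additionally decide which solution to remove once the archive exceeds its capacity, e.g., by an indicator contribution.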
An adaptation reference-point-based multiobjective evolutionary algorithm
It is well known that maintaining a good balance between convergence and diversity is crucial to the performance of multiobjective evolutionary algorithms (MOEAs). However, the shape of the Pareto front (PF) of multiobjective optimization problems (MOPs) affects the performance of MOEAs, especially reference-point-based ones. This paper proposes a reference-point-based adaptive method that studies the PF of an MOP according to the candidate solutions in the population. In addition, a proportion-and-angle function is presented to select elites during environmental selection. Compared with five state-of-the-art MOEAs, the proposed algorithm shows highly competitive effectiveness on MOPs with six complex characteristics.
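A common building block of reference-point-based selection of the kind discussed above is associating each candidate with its angularly closest reference vector in objective space. The following is a generic sketch of that association step (our own illustration, not the paper's proportion-and-angle function):

```python
import math

def angle(u, v):
    """Angle (radians) between objective vector u and reference vector v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def associate(points, ref_vectors):
    """Assign each objective vector to its angularly closest reference vector."""
    groups = {i: [] for i in range(len(ref_vectors))}
    for p in points:
        i = min(range(len(ref_vectors)), key=lambda k: angle(p, ref_vectors[k]))
        groups[i].append(p)
    return groups

refs = [(1.0, 0.0), (0.7, 0.7), (0.0, 1.0)]
groups = associate([(2.0, 0.1), (1.0, 1.2), (0.1, 3.0)], refs)
```

Each reference vector's group can then be pruned to its best member during environmental selection, which is where convergence and diversity are traded off.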
Efficient Computation of Expected Hypervolume Improvement Using Box Decomposition Algorithms
In the field of multi-objective optimization algorithms, multi-objective
Bayesian Global Optimization (MOBGO) is an important branch, in addition to
evolutionary multi-objective optimization algorithms (EMOAs). MOBGO utilizes
Gaussian Process models learned from previous objective function evaluations to
decide the next evaluation site by maximizing or minimizing an infill
criterion. A common criterion in MOBGO is the Expected Hypervolume Improvement
(EHVI), which shows a good performance on a wide range of problems, with
respect to exploration and exploitation. However, so far it has been a
challenge to calculate exact EHVI values efficiently. In this paper, an
efficient algorithm for the computation of the exact EHVI for a generic case is
proposed. This efficient algorithm is based on partitioning the integration
volume into a set of axis-parallel slices. Theoretically, the upper bound time
complexities are improved from previously O(n^2) and O(n^3), for two- and
three-objective problems respectively, to Θ(n log n), which is
asymptotically optimal. This article generalizes the scheme to the
higher-dimensional case by utilizing a new hyperbox decomposition technique,
which was proposed by Dächert et al., EJOR, 2017. It also utilizes a generalization of
the multilayered integration scheme that scales linearly in the number of
hyperboxes of the decomposition. The speed comparison shows that the proposed
algorithm in this paper significantly reduces computation time. Finally, this
decomposition technique is applied in the calculation of the Probability of
Improvement (PoI).
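The deterministic core of EHVI is the hypervolume improvement a candidate contributes over the current front; in two dimensions it can be computed exactly with the kind of axis-parallel slicing the paper builds on. A minimal sketch, for plain hypervolume improvement rather than its expectation under the Gaussian model, and assuming minimisation:

```python
def hv_2d(points, ref):
    """Exact 2-D hypervolume (minimisation) of the region dominated by
    `points` and bounded by reference point `ref`, via a sweep over
    axis-parallel slices."""
    nd, best_f2 = [], float("inf")
    for f1, f2 in sorted(points):          # ascending in the first objective
        if f2 < best_f2:                   # keep only non-dominated points
            nd.append((f1, f2))
            best_f2 = f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in nd:                      # one axis-parallel slice per point
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def hvi_2d(front, y, ref):
    """Hypervolume improvement of candidate y over the current front."""
    return hv_2d(list(front) + [y], ref) - hv_2d(front, ref)
```

EHVI then integrates this improvement over the Gaussian Process's predictive distribution of the candidate's objective values; the box decomposition makes each piece of that integral a closed-form term.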
On the Impact of Multiobjective Scalarizing Functions
Recently, there has been a renewed interest in decomposition-based approaches
for evolutionary multiobjective optimization. However, the impact of the choice
of the underlying scalarizing function(s) is still far from being well
understood. In this paper, we investigate the behavior of different scalarizing
functions and their parameters. We thereby abstract firstly from any specific
algorithm and only consider the difficulty of the single scalarized problems in
terms of the search ability of a (1+lambda)-EA on biobjective NK-landscapes.
Secondly, combining the outcomes of independent single-objective runs allows
for more general statements on set-based performance measures. Finally, we
investigate the correlation between the opening angle of the scalarizing
function's underlying contour lines and the position of the final solution in
the objective space. Our analysis is of fundamental nature and sheds more light
on the key characteristics of multiobjective scalarizing functions. Comment:
appears in Parallel Problem Solving from Nature - PPSN XIII, Ljubljana,
Slovenia (2014).
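The two scalarizing-function families most relevant to studies like the one above, weighted sum and weighted Chebyshev, can be written compactly. A minimal sketch (minimisation, with ideal point z; our illustration, not the paper's exact parameterisation):

```python
def weighted_sum(f, w):
    """Weighted-sum scalarisation: linear contour lines, cannot reach
    non-convex parts of the Pareto front."""
    return sum(wi * fi for wi, fi in zip(w, f))

def chebyshev(f, w, z):
    """Weighted Chebyshev scalarisation: L-shaped contour lines whose
    opening angle and direction depend on the weights; can reach
    non-convex regions of the front."""
    return max(wi * (fi - zi) for wi, fi, zi in zip(w, f, z))

f = (2.0, 4.0)                               # an objective vector
s1 = weighted_sum(f, (0.5, 0.5))             # 3.0
s2 = chebyshev(f, (0.5, 0.5), (0.0, 0.0))    # 2.0
```

Varying the weights changes which scalarized subproblem a (1+lambda)-EA effectively solves, which is exactly the per-subproblem difficulty the paper investigates.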
Optimization as a design strategy. Considerations based on building simulation-assisted experiments about problem decomposition
In this article the most fundamental decomposition-based optimization method
- block coordinate search, based on the sequential decomposition of problems in
subproblems - and building performance simulation programs are used to reason
about a building design process at micro-urban scale and strategies are defined
to make the search more efficient. Cyclic overlapping block coordinate search
is here considered in its double nature of optimization method and surrogate
model (and metaphor) of a sequential design process. Heuristic indicators apt
to support the design of search structures suited to that method are developed
from building-simulation-assisted computational experiments, aimed to choose
the form and position of a small building in a plot. Those indicators link the
sharing of structure between subspaces ("commonality") to recursive
recombination, measured as freshness of the search wake and novelty of the
search moves. The aim of these indicators is to measure the relative
effectiveness of decomposition-based design moves and create efficient block
searches. Implications of a possible use of these indicators in genetic
algorithms are also highlighted. Comment: 48 pages, 12 figures, 3 tables.
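Cyclic overlapping block coordinate search, the method the article builds on, reduces to a simple loop: cycle through (possibly overlapping) blocks of design variables and improve one block at a time while freezing the rest. A minimal random-search sketch under those assumptions (ours, not the article's building-simulation setup):

```python
import random

def cyclic_block_search(objective, x0, blocks, step=0.5, sweeps=20, trials=10, seed=0):
    """Cyclic block coordinate search: repeatedly try random perturbations
    of one block of variables at a time, keeping only improvements."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(sweeps):
        for block in blocks:            # cycle through the (overlapping) blocks
            for _ in range(trials):
                cand = list(x)
                for i in block:         # perturb only this block's variables
                    cand[i] += rng.uniform(-step, step)
                if objective(cand) < objective(x):
                    x = cand
    return x

sphere = lambda x: sum(v * v for v in x)
blocks = [(0, 1), (1, 2)]               # overlapping: variable 1 is shared
best = cyclic_block_search(sphere, [2.0, 2.0, 2.0], blocks)
```

The shared variable between the two blocks is a crude stand-in for the "commonality" between subspaces that the article's indicators try to measure.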