Multiplicative Approximations, Optimal Hypervolume Distributions, and the Choice of the Reference Point
Many optimization problems arising in applications have to consider several
objective functions at the same time. Evolutionary algorithms seem to be a very
natural choice for dealing with multi-objective problems as the population of
such an algorithm can be used to represent the trade-offs with respect to the
given objective functions. In this paper, we contribute to the theoretical
understanding of evolutionary algorithms for multi-objective problems. We
consider indicator-based algorithms whose goal is to maximize the hypervolume
for a given problem by distributing μ points on the Pareto front. To gain
new theoretical insights into the behavior of hypervolume-based algorithms we
compare their optimization goal to the goal of achieving an optimal
multiplicative approximation ratio. Our studies are carried out for different
Pareto front shapes of bi-objective problems. For the class of linear fronts
and a class of convex fronts, we prove that maximizing the hypervolume gives
the best possible approximation ratio when assuming that the extreme points
have to be included in both distributions of the points on the Pareto front.
Furthermore, we investigate the influence of the choice of the reference
point on the approximation behavior of hypervolume-based approaches and
examine Pareto fronts of different shapes by numerical calculations.
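The hypervolume these algorithms maximize has a simple form in the bi-objective case: sort the points by the first objective and sweep, accumulating rectangular slices down to the reference point. A minimal sketch under a minimization convention (every point assumed to dominate the reference point; illustrative code, not from the paper):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of bi-objective points w.r.t. a
    reference point, assuming minimization."""
    pts = sorted(points)
    # After sorting by f1, the non-dominated points have strictly
    # decreasing f2; drop dominated points on the fly.
    front = []
    for p in pts:
        if not front or p[1] < front[-1][1]:
            front.append(p)
    # Sum one horizontal slice per front point.
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For example, the two extreme points (0, 1) and (1, 0) with reference point (2, 2) dominate an area of 3.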
Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization
The use of surrogate-based optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of competitive solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of weighting and aggregating the costs upfront). Most of the work in multiobjective optimization is focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well-suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization, may prove to be even more worthwhile than SBO methods to expedite the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied on multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2 and SMS-EMOA multiobjective optimization methods.
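The criterion in question can be stated compactly: for a candidate whose surrogate predicts an independent Gaussian distribution per objective, the multiobjective probability of improvement is the probability that the sampled objective vector is not dominated by the current Pareto set. The paper's contribution is computing such criteria quickly; for contrast, a naive Monte Carlo estimator (hypothetical helper names, sketch only) makes the definition concrete:

```python
import random

def dominates(a, b):
    """a dominates b (minimization): a <= b everywhere, < somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def prob_of_improvement(mean, std, pareto_set, n_samples=10000, rng=None):
    """Monte Carlo estimate of the probability that a candidate with
    independent Gaussian predictions (mean[i], std[i]) per objective
    is not dominated by any point of the current Pareto set."""
    rng = rng or random.Random(0)
    hits = 0
    for _ in range(n_samples):
        y = [rng.gauss(m, s) for m, s in zip(mean, std)]
        if not any(dominates(p, y) for p in pareto_set):
            hits += 1
    return hits / n_samples
```

A candidate predicted to lie far below the current front gets a probability near 1, one far above it near 0; the fast analytical calculation in the paper replaces exactly this kind of sampling loop.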
Scalarized Preferences in Multi-objective Optimization
Multi-criteria optimization problems have no solution that is optimal in every objective function. The difficulty of such problems lies in finding a compromise solution that satisfies the preferences of the decision maker who implements the compromise. To solve these problems, scalarization (the mapping of the vector of objective values to a real number) identifies a single solution as the global preference optimum. However, scalarization methods generate no additional information about other compromise solutions that could change the decision maker's preferences regarding the global optimum. To address this problem, this dissertation provides a theoretical and algorithmic analysis of scalarized preferences. The theoretical analysis consists of developing an order-theoretic framework that characterizes preferences as problem transformations defining preferred subsets of the Pareto front. Scalarization is represented in this framework as a transformation of the objective set. Furthermore, axioms are proposed that capture desirable properties of scalarization functions, and it is shown under which conditions existing scalarization functions satisfy these axioms. The algorithmic analysis characterizes preferences by the result that an optimization algorithm generates. Two new paradigms are identified within this analysis, and for both, algorithms are designed that use scalarized preference information: preference-biased Pareto front approximations distribute points across the entire Pareto front but concentrate more points in regions with better scalarization values; multimodal preference optima are points that represent local scalarization optima in objective space.
A three-stage algorithm is developed that approximates local scalarization optima, and different methods are evaluated for the individual stages. Two real-world problems are presented that illustrate the usefulness of the two algorithms. The first problem consists of finding schedules for a combined heat and power plant that maximize the generated electricity and heat while minimizing fuel consumption. Preference-biased approximations generate more energy-efficient solutions, among which the decision maker can select a favored solution by weighing the conflicts between the three objectives. The second problem concerns creating schedules for appliances in a residential building such that energy costs, carbon dioxide emissions, and thermal discomfort are minimized. It is shown that local scalarization optima represent schedules that offer a good balance between the three objectives. The analysis and experiments presented in this work enable decision makers to make better decisions by applying methods that generate more options consistent with the decision makers' preferences.
Optimal μ-Distributions for the Hypervolume Indicator for Problems With Linear Bi-Objective Fronts: Exact and Exhaustive Results
To simultaneously optimize multiple objective functions, several evolutionary multiobjective optimization (EMO) algorithms have been proposed. Nowadays, often set quality indicators are used when comparing the performance of those algorithms or when selecting "good" solutions during the algorithm run. Hence, characterizing the solution sets that maximize a certain indicator is crucial, complying with the optimization goal of many indicator-based EMO algorithms. If these optimal solution sets are upper bounded in size, e.g., by the population size μ, we call them optimal μ-distributions. Recently, optimal μ-distributions for the well-known hypervolume indicator have been theoretically analyzed, in particular, for bi-objective problems with a linear Pareto front. Although the exact optimal μ-distributions have been characterized in this case, not all possible choices of the hypervolume's reference point have been investigated. In this paper, we revisit the previous results and rigorously characterize the optimal μ-distributions also for all other reference point choices. In this sense, our characterization is now exhaustive, as the result holds for any linear Pareto front and for any choice of the reference point, and the optimal μ-distributions turn out to be always unique in those cases. We also prove a tight lower bound (depending on μ) such that choosing the reference point above this bound ensures the extremes of the Pareto front to be always included in optimal μ-distributions.
Seeding the Initial Population of Multi-Objective Evolutionary Algorithms: A Computational Study
Most experimental studies initialize the population of evolutionary
algorithms with random genotypes. In practice, however, optimizers are
typically seeded with good candidate solutions either previously known or
created according to some problem-specific method. This "seeding" has been
studied extensively for single-objective problems. For multi-objective
problems, however, very little literature is available on the approaches to
seeding and their individual benefits and disadvantages. In this article, we
are trying to narrow this gap via a comprehensive computational study on common
real-valued test functions. We investigate the effect of two seeding techniques
for five algorithms on 48 optimization problems with 2, 3, 4, 6, and 8
objectives. We observe that some functions (e.g., DTLZ4 and the LZ family)
benefit significantly from seeding, while others (e.g., WFG) profit less. The
advantage of seeding also depends on the examined algorithm.
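The seeding setup the study describes reduces to a simple recipe: copy known-good candidate solutions into the initial population and fill the remainder with random genotypes. A minimal illustration with hypothetical helper names (not the study's code):

```python
import random

def seeded_population(pop_size, n_vars, bounds, seeds, rng=None):
    """Build an initial population for a real-valued EA: copy in known
    candidate solutions ('seeds') first, then fill the rest with
    uniform random genotypes within the variable bounds."""
    rng = rng or random.Random(42)
    lo, hi = bounds
    pop = [list(s) for s in seeds[:pop_size]]       # seeded individuals
    while len(pop) < pop_size:                      # random remainder
        pop.append([rng.uniform(lo, hi) for _ in range(n_vars)])
    return pop
```

The fraction of seeded individuals is one of the knobs such a computational study varies.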
Hypervolume in Biobjective Optimization Cannot Converge Faster Than Ω(1/p)
The hypervolume indicator is widely used by multi-objective optimization algorithms and for assessing their performance. We investigate a set of p vectors in the biobjective space that maximizes the hypervolume indicator with respect to some reference point, referred to as the p-optimal distribution. We prove explicit lower and upper bounds on the gap between the hypervolumes of the p-optimal distribution and the ∞-optimal distribution (the Pareto front) as a function of p, of the reference point, and of some Lipschitz constants. On a wide class of functions, this optimality gap cannot be smaller than Ω(1/p), thereby establishing a bound on the optimal convergence speed of any algorithm. For functions with either bilipschitz or convex Pareto fronts, we also establish an upper bound, and the gap is hence Θ(1/p). The presented bounds are not only asymptotic; in particular, functions with a linear Pareto front have a normalized exact gap proportional to 1/p for any reference point dominating the nadir point. We empirically investigate the exact optimality gap on a small set of Pareto fronts for values of p up to 1000 and find in all cases a dependency resembling 1/p.
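For the linear-front case, the 1/p behavior is easy to reproduce numerically: place p evenly spaced points on the front f2 = 1 - f1 (the optimal distribution for a reference point dominating the nadir point, extremes included) and compare their hypervolume with that of the whole front. A small sketch under these assumptions (minimization, reference point (2, 2), for which the full front dominates an area of 3.5):

```python
def hv_linear_front(p, ref=2.0):
    """Hypervolume (minimization) of p evenly spaced points on the
    linear front {(x, 1-x): x in [0, 1]} w.r.t. (ref, ref)."""
    pts = [(i / (p - 1), 1.0 - i / (p - 1)) for i in range(p)]
    hv, prev = 0.0, ref          # points are sorted by f1, f2 decreasing
    for f1, f2 in pts:
        hv += (ref - f1) * (prev - f2)
        prev = f2
    return hv

full = 3.5  # hypervolume of the entire front for reference point (2, 2)
for p in (10, 100, 1000):
    gap = full - hv_linear_front(p)
    print(p, gap * (p - 1))  # stays near 0.5: gap = 1/(2(p-1))
```

The gap is exactly p-1 triangles of side 1/(p-1), i.e. 1/(2(p-1)), consistent with the Θ(1/p) rate stated above.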
Bi-objective facility location in the presence of uncertainty
Multiple and usually conflicting objectives subject to data uncertainty are
main features in many real-world problems. Consequently, in practice,
decision-makers need to understand the trade-off between the objectives,
considering different levels of uncertainty in order to choose a suitable
solution. In this paper, we consider a two-stage bi-objective single source
capacitated model as a base formulation for designing a last-mile network in
disaster relief where one of the objectives is subject to demand uncertainty.
We analyze scenario-based two-stage risk-neutral stochastic programming,
adaptive (two-stage) robust optimization, and a two-stage risk-averse
stochastic approach using conditional value-at-risk (CVaR). To cope with the
bi-objective nature of the problem, we embed these concepts into two criterion
space search frameworks, the ε-constraint method and the balanced box
method, to determine the Pareto frontier. Additionally, a matheuristic
technique is developed to obtain high-quality approximations of the Pareto
frontier for large-size instances. In an extensive computational experiment, we
evaluate and compare the performance of the applied approaches based on
real-world data from a Thies drought case, Senegal.
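On a finite solution set, the ε-constraint idea reduces to a short loop: optimize one objective subject to a bound on the other, then tighten the bound past the solution just found. A toy sketch for integer-valued objectives (illustrative only; the paper embeds the method in a two-stage stochastic model solved with a MIP solver):

```python
def epsilon_constraint(solutions, f1, f2):
    """Enumerate the Pareto frontier of a finite bi-objective
    minimization problem: repeatedly minimize f1 subject to
    f2 <= eps, tightening eps after each efficient solution."""
    pareto, eps = [], float("inf")
    while True:
        cand = [s for s in solutions if f2(s) <= eps]
        if not cand:
            break
        # Lexicographic tie-break ensures an efficient solution.
        best = min(cand, key=lambda s: (f1(s), f2(s)))
        pareto.append(best)
        eps = f2(best) - 1      # assumes integer-valued f2
    return pareto
```

Each iteration produces one non-dominated point, moving along the frontier from the f1-best to the f2-best solution.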
A Practical Guide to Multi-Objective Reinforcement Learning and Planning
Real-world decision-making tasks are generally complex, requiring trade-offs
between multiple, often conflicting, objectives. Despite this, the majority of
research in reinforcement learning and decision-theoretic planning either
assumes only a single objective, or that multiple objectives can be adequately
handled via a simple linear combination. Such approaches may oversimplify the
underlying problem and hence produce suboptimal results. This paper serves as a
guide to the application of multi-objective methods to difficult problems, and
is aimed at researchers who are already familiar with single-objective
reinforcement learning and planning methods who wish to adopt a multi-objective
perspective on their research, as well as practitioners who encounter
multi-objective decision problems in practice. It identifies the factors that
may influence the nature of the desired solution, and illustrates by example
how these influence the design of multi-objective decision-making systems for
complex problems.
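Why a simple linear combination can oversimplify is easy to demonstrate: a Pareto-optimal point in a concave region of the front is never the minimizer of any weighted sum of the objectives. A toy sketch (hypothetical points, minimization):

```python
# Three outcomes: A and B are the extremes, C sits on a concave
# (non-convex) part of the front. Nobody dominates C.
points = {"A": (0.0, 1.0), "B": (1.0, 0.0), "C": (0.6, 0.6)}

def linear_winner(w):
    """Name of the point minimizing the weighted sum w[0]*f1 + w[1]*f2."""
    return min(points, key=lambda k: w[0] * points[k][0] + w[1] * points[k][1])

# Sweep the whole weight simplex: C is never selected.
winners = {linear_winner((w, 1.0 - w)) for w in [i / 100 for i in range(101)]}
print(winners)  # only A and B ever win; C requires a non-linear method
```

For every weight, C scores 0.6 while min(A, B) scores at most 0.5, so methods restricted to linear scalarization cannot return C even though a user may prefer it.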
A model of anytime algorithm performance for bi-objective optimization
Anytime algorithms allow a practitioner to trade off runtime for solution quality. This is of particular interest in multi-objective combinatorial optimization, since it can be infeasible to identify all efficient solutions in a reasonable amount of time. We present a theoretical model that, under some mild assumptions, characterizes the "optimal" trade-off between runtime and solution quality, measured in terms of relative hypervolume, of anytime algorithms for bi-objective optimization. In particular, we assume that efficient solutions are collected sequentially such that the collected solution at each iteration maximizes the hypervolume indicator, and that the non-dominated set can be well approximated by a quadrant of a superellipse. We validate our model against an "optimal" model that has complete knowledge of the non-dominated set. The empirical results suggest that our theoretical model approximates the behavior of this optimal model quite well. We also analyze the anytime behavior of an ε-constraint algorithm, and show that our model can be used to guide the algorithm and improve its anytime behavior.
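The sequential collection the model assumes can be emulated on a known non-dominated set: at each iteration, add the point that maximizes the hypervolume of the collection so far. A greedy sketch (minimization, hypothetical names; the paper additionally approximates the front by a superellipse quadrant rather than enumerating it):

```python
def hv2d(points, ref):
    """2-D hypervolume (minimization) w.r.t. reference point `ref`."""
    hv, prev = 0.0, ref[1]
    for f1, f2 in sorted(points):
        if f2 < prev:                       # skip dominated points
            hv += (ref[0] - f1) * (prev - f2)
            prev = f2
    return hv

def greedy_anytime(front, ref, k):
    """Collect k points from a known non-dominated set so that each
    new point maximizes the hypervolume of the collection so far."""
    chosen, remaining = [], list(front)
    for _ in range(k):
        best = max(remaining, key=lambda p: hv2d(chosen + [p], ref))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

On a symmetric linear front the first collected point is the middle one, after which the greedy sweep fills in the largest remaining gaps, mirroring the anytime behavior the model describes.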
Hypervolume based metaheuristics for multiobjective optimization
The purpose of multiobjective optimization is to find solutions that are optimal
regarding several goals. In the branch of vector or Pareto optimization, all
these goals are considered to be of equal importance, so that compromise
solutions that cannot be improved regarding one goal without deteriorating in
another are Pareto-optimal. A variety of quality measures exist to evaluate
approximations of the Pareto-optimal set generated by optimizers, wherein the
hypervolume is the most significant one, making the hypervolume calculation a
core problem of multiobjective optimization. This thesis tackles that challenge
by providing a new hypervolume algorithm from computational geometry and
analyzing the problem's computational complexity.
Evolutionary multiobjective optimization algorithms (EMOA) are state-of-the-art
methods for Pareto optimization, wherein the hypervolume-based algorithms
belong to the most powerful ones, among them the popular SMS-EMOA. After its
promising capabilities were demonstrated in first studies, this thesis is
dedicated to a deeper understanding of the underlying optimization process of
the SMS-EMOA and similar algorithms, in order to specify their performance
characteristics. Theoretical analyses are accomplished as far as possible with
established and newly developed tools. Beyond the limitations of rigorous
scrutiny, insights are gained via thorough experimental investigation. All
considered problems are continuous, although the algorithms are equally
applicable to discrete problems.
More precisely, the following topics are addressed. The process of approaching
the Pareto-optimal set is characterized by the convergence speed, which is
analyzed for a general framework of EA with hypervolume selection on several
classes of bi-objective problems. The results are achieved by a newly developed
concept of linking single- and multiobjective optimization. The optimization on
the Pareto front, that is, turning the population into a set with maximal
hypervolume, is considered separately, focusing on the question under which
circumstances the steady-state selection of exchanging only one population
member suffices to reach a global optimum. We answer this question for
different bi-objective problem classes.
In a benchmark study on so-called many-objective problems with more than three
objectives, the capability of the SMS-EMOA is demonstrated in comparison to
other EMOA, while also studying their causes of failure. Within the mentioned
examinations, the choice of the hypervolume's reference point receives special
consideration by exposing its influence. Beyond the study of the SMS-EMOA with
its default setup, it is analyzed to what extent the performance can be
improved by parameter tuning of the EMOA for certain problems, focusing on the
influence of variation operators. Lastly, an optimization algorithm based on
the gradient of the hypervolume is developed and hybridized with the SMS-EMOA.
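The steady-state selection discussed throughout can be sketched directly: in each (μ+1) step, SMS-EMOA discards the individual whose removal costs the least hypervolume. A bi-objective sketch with hypothetical helper names (contributions computed naively here, not with the thesis's geometric algorithm):

```python
def hv2d(points, ref):
    """2-D hypervolume (minimization) w.r.t. reference point `ref`."""
    hv, prev = 0.0, ref[1]
    for f1, f2 in sorted(points):
        if f2 < prev:                       # skip dominated points
            hv += (ref[0] - f1) * (prev - f2)
            prev = f2
    return hv

def least_contributor(front, ref):
    """Index of the point whose removal loses the least hypervolume --
    the individual an SMS-EMOA-style (mu+1) selection would discard."""
    total = hv2d(front, ref)
    losses = [total - hv2d(front[:i] + front[i + 1:], ref)
              for i in range(len(front))]
    return losses.index(min(losses))
```

The reference point enters through `ref`: moving it changes every contribution, which is one way its influence, highlighted in the thesis, becomes visible.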