314 research outputs found
A Weakly Pareto Compliant Quality Indicator
In multi-objective optimization problems, the optimization target is to obtain a set of non-dominated solutions. Comparing solution sets is crucial for evaluating the performance of different optimization algorithms. Performance indicators are commonly used to compare such sets and, by extension, the algorithms that produced them. A good solution set must be close to the Pareto-optimal front, well distributed, maximally extended and fully filled. An effective performance indicator must therefore capture all of these features as a whole and must be Pareto dominance compliant. Unfortunately, some well-known indicators often fail to properly reflect the quality of a solution set, or are expensive to compute. This paper demonstrates that the Degree of Approximation (DOA) quality indicator is a weakly Pareto compliant unary indicator that gives a good estimate of the match between the approximated front and the Pareto-optimal front. Moreover, DOA is easy and fast to compute.
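The abstract does not give the DOA formula itself; as an illustration of how a unary distance-based indicator estimates the match between an approximation front and the Pareto-optimal front, here is a minimal IGD-style sketch (the function name and the sample fronts are illustrative, not taken from the paper):

```python
import math

def igd_like(reference_front, approximation):
    """Mean distance from each reference (Pareto-optimal) point to its
    nearest neighbour in the approximation set; lower is better."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, a) for a in approximation)
               for r in reference_front) / len(reference_front)

# An approximation lying exactly on the reference front scores 0.
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(igd_like(front, front))         # 0.0
print(igd_like(front, [(1.0, 1.0)]))  # larger (worse) value
```

Indicators of this family are cheap to evaluate, which matches the abstract's point about computational cost.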
GALAXY: A new hybrid MOEA for the Optimal Design of Water Distribution Systems
This is the final version of the article, available from the American Geophysical Union via the DOI in this record. The first author gratefully acknowledges the financial support given by both the University of Exeter and the China Scholarship Council (CSC) toward the PhD research. We also thank the three anonymous reviewers, who helped improve the quality of this paper substantially. The source code of the latest versions of NSGA-II and ε-MOEA can be downloaded from the official website of the Kanpur Genetic Algorithms Laboratory via http://www.iitk.ac.in/kangal/codes.shtml. The description of each benchmark problem used in this paper, including the input file of EPANET and the associated best-known Pareto front, can be accessed from the following link to the Centre for Water Systems (http://tinyurl.com/cwsbenchmarks/). GALAXY can be accessed via http://tinyurl.com/cws-galaxy
How to Evaluate Solutions in Pareto-based Search-Based Software Engineering? A Critical Review and Methodological Guidance
With modern requirements, there is an increasing tendency to consider
multiple objectives/criteria simultaneously in many Software Engineering (SE)
scenarios. Such a multi-objective optimization scenario comes with an important
issue -- how to evaluate the outcome of optimization algorithms, which
typically is a set of incomparable solutions (i.e., being Pareto non-dominated
to each other). This issue can be challenging for the SE community,
particularly for practitioners of Search-Based SE (SBSE). On one hand,
multi-objective optimization could still be relatively new to SE/SBSE
researchers, who may not be able to identify the right evaluation methods for
their problems. On the other hand, simply following the evaluation methods for
general multi-objective optimization problems may not be appropriate for
specific SE problems, especially when the problem nature or decision maker's
preferences are explicitly/implicitly available. This has been well echoed in
the literature by various instances of inappropriate/inadequate selection and
inaccurate/misleading use of evaluation methods. In this paper, we first carry
out a systematic and critical review of quality evaluation for multi-objective
optimization in SBSE. We survey 717 papers published between 2009 and 2019 from
36 venues in seven repositories, and select 95 prominent studies, through which
we identify five important but overlooked issues in the area. We then conduct
an in-depth analysis of quality evaluation indicators/methods and general
situations in SBSE, which, together with the identified issues, enables us to
codify a methodological guidance for selecting and using evaluation methods in
different SBSE scenarios.
Comment: This paper has been accepted by IEEE Transactions on Software Engineering, available as full OA: https://ieeexplore.ieee.org/document/925218
Performance Metrics Ensemble for Multiobjective Evolutionary Algorithms
There are five types of unary performance metrics and two types of binary performance metrics, but no single metric can faithfully measure MOEA performance. Moreover, every metric has its own character, and no metric can fully substitute for another. An ensemble method is introduced to compare evolutionary algorithms by combining a large number of single metrics using a modified Double Tournament Selection. Double Tournament Selection maximally protects a qualified individual from being eliminated by stochastic factors during a comparison. This ensures that the final result is truly the best one and that the whole ensemble process is effective and precise. The performance metrics ensemble thereby overcomes the information loss of any single metric, which provides only specific and limited information. Furthermore, the ensemble method avoids the computationally heavy process of choosing among metrics and can be used directly to assess evolutionary algorithms. Finally, each MOEA's characteristics are summarized from the experimental results obtained with the performance metrics ensemble.
School of Electrical & Computer Engineering
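The abstract does not spell out the modified Double Tournament Selection; a minimal sketch of the generic double-tournament idea is given below, in which winners of first-round tournaments under one score compete in a second round under another score (all names, parameters and defaults here are illustrative assumptions, not the paper's method):

```python
import random

def tournament(candidates, score, k, rng):
    """Sample k candidates and return the one with the best (lowest) score."""
    return min(rng.sample(candidates, k), key=score)

def double_tournament(candidates, score_a, score_b, k=2, rounds=3, seed=0):
    """Winners of several first-round tournaments under score_a
    compete in a second-round tournament under score_b."""
    rng = random.Random(seed)
    finalists = [tournament(candidates, score_a, k, rng) for _ in range(rounds)]
    return min(finalists, key=score_b)

# With k equal to the population size, the first round is deterministic.
best = double_tournament(list(range(5)), lambda x: x, lambda x: x, k=5)
print(best)  # 0
```

Running two rounds under different criteria is what gives a well-rounded individual extra protection against being lost to a single noisy comparison.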
Optimization for Image Segmentation
Image segmentation, i.e., assigning each pixel a discrete label, is an essential task in computer vision with many applications. Major segmentation techniques include, for example, Markov Random Fields (MRF), Kernel Clustering (KC), and the nowadays popular Convolutional Neural Networks (CNN). In this work, we focus on optimization for image segmentation. Techniques like MRF, KC, and CNN optimize MRF energies, KC criteria, or CNN losses respectively, and their corresponding optimization is very different. We are interested in the synergy and the complementary benefits of MRF, KC, and CNN for interactive segmentation and semantic segmentation. Our first contribution is pseudo-bound optimization for binary MRF energies that are high-order or non-submodular. Secondly, we propose Kernel Cut, a novel formulation for segmentation, which combines MRF regularization with Kernel Clustering. We show why to combine KC with MRF and how to optimize the joint objective. In the third part, we discuss how deep CNN segmentation can benefit from non-deep (i.e., shallow) methods like MRF and KC. In particular, we propose regularized losses for weakly-supervised CNN segmentation, in which we can integrate MRF energies or KC criteria as part of the losses. Minimization of regularized losses is a principled approach to semi-supervised learning in general. Our regularized loss method is very simple and allows different kinds of regularization losses for CNN segmentation. We also study the optimization of regularized losses beyond gradient descent. Our regularized losses approach achieves state-of-the-art accuracy in semantic segmentation with near full-supervision quality.
Active Robust Optimization - Optimizing for Robustness of Changeable Products
To succeed in a demanding and competitive market, great attention needs to be given to the process of product design. Incorporating optimization into the process enables the designer to find high-quality products according to their simulated performance. However, the actual performance may differ from the simulation results due to a variety of uncertainty factors. Robust optimization is commonly used to search for products that are less affected by the anticipated uncertainties. Changeability can improve the robustness of a product, as it allows the product to be adapted to a new configuration whenever the uncertain conditions change. This ability provides the changeable product with an active form of robustness.
Several methodologies exist for the engineering design of changeable products, none of which includes optimization. This study presents the Active Robust Optimization (ARO) framework, which offers the missing tools for optimizing changeable products. A new optimization problem is formulated, named the Active Robust Optimization Problem (AROP). The benefit of designing solutions by solving an AROP lies in the realistic manner in which adaptation is considered when assessing the solutions' performance.
The novel methodology can be applied to optimize any product that can be classified as a changeable product, i.e., it can be adjusted by its user during normal operation. This definition applies to a huge variety of applications, ranging from simple products such as fans and heaters, to complex systems such as production halls and transportation systems.
The ARO framework is described in this dissertation and its unique features are studied. Its ability to find robust changeable solutions is examined for different sources of uncertainty, robustness criteria and sampling conditions.
Additionally, a framework for Active Robust Multi-objective Optimization is developed. This generalisation of ARO itself presents many challenges, not encountered in previous studies. Novel approaches for evaluating and comparing changeable designs comprising multiple objectives are proposed along with algorithms for solving multi-objective AROPs.
The framework and associated methodologies are demonstrated on two applications from different fields of engineering design. The first is an adjustable optical table, and the second is the selection of gears in a gearbox.
Bicriterial relocation of the Viennese ambulance service
The organization of emergency medical services is characterised by competing objectives. Besides low response times of the ambulances, other aspects have to be considered as well. To address this problem, a dynamic relocation strategy for ambulance vehicles is established in this work. One objective is to maximize the resident population reachable within a given time frame. This can be achieved relatively easily for a fixed number of vehicles, but whenever a vehicle is dispatched to a call, the number of available vehicles changes and relocations may be necessary to maximize the population coverage. However, too many relocations would be unreasonable for the ambulance crews. Therefore, the second objective is to minimize the number of relocations. The problem of dynamically relocating the vehicles is resolved by an a priori approach which solves all possible states in advance. Each time the number of vehicles changes, the appropriate precalculated solution is applied. The high computational complexity of this strategy is met by Pareto Ant Colony Optimization (PACO), a metaheuristic specialized in multiobjective optimisation problems and inspired by the foraging behaviour of real ants. Different algorithm versions are established and programmed in MatLab in order to explore the capabilities of the algorithm.
Variations in the pheromone structures and in the size of the solution space, as well as approaches using a shifting number of ants, are tested for their influence on the convergence behaviour of the algorithm. The algorithm yields multiple solutions; these sets of solutions are called approximation sets, as they approximate the Pareto-optimal front. The presence of several optimization criteria prevents an objective ranking of the approximation sets, but a combination of unary measures can be used to assess certain quality aspects. In this work the found-solutions ratio, the average distance and the hypervolume metric are implemented. Knowledge of the true Pareto-optimal front allows for a far better evaluation of performance. Therefore the algorithms are first tested on a problem instance small enough to completely enumerate all solutions, before they are used to develop a relocation strategy for the NEF system (Notarzt-Einsatzfahrzeug) of the Viennese ambulance service
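Of the three measures named above, the hypervolume is the most involved to compute; for two objectives under minimization it reduces to a single sweep over the points sorted by the first objective. A minimal sketch (assuming every point dominates the reference point; names and sample values are illustrative):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-D points (minimization) relative to a
    reference point `ref`; larger is better."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(points):          # sweep by first objective
        if y < prev_y:                   # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

print(hypervolume_2d([(1, 2), (2, 1)], ref=(3, 3)))  # 3.0
```

Beyond two or three objectives, exact hypervolume computation becomes expensive, which is why combinations of cheaper unary measures are often used alongside it.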
Complexity Theory for Discrete Black-Box Optimization Heuristics
A predominant topic in the theory of evolutionary algorithms and, more
generally, theory of randomized black-box optimization techniques is running
time analysis. Running time analysis aims at understanding the performance of a
given heuristic on a given problem by bounding the number of function
evaluations that are needed by the heuristic to identify a solution of a
desired quality. As in general algorithms theory, this running time perspective
is most useful when it is complemented by a meaningful complexity theory that
studies the limits of algorithmic solutions.
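Running time analysis counts exactly these function evaluations. As a concrete illustration, here is a minimal sketch of a (1+1) evolutionary algorithm on the classic OneMax benchmark, returning the number of evaluations until the optimum is found (function name and defaults are illustrative; the well-known expected running time here is Θ(n log n)):

```python
import random

def one_plus_one_ea_onemax(n, seed=0):
    """(1+1) EA with standard bit mutation on OneMax; returns the number
    of function evaluations needed to reach the all-ones string."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evals = sum(x), 1                # OneMax fitness: number of ones
    while fx < n:
        y = [1 - b if rng.random() < 1 / n else b for b in x]  # flip each bit w.p. 1/n
        fy = sum(y)
        evals += 1
        if fy >= fx:                     # accept the offspring if not worse
            x, fx = y, fy
    return evals

print(one_plus_one_ea_onemax(20))  # on the order of n*log(n) evaluations
```

Black-box complexity asks the complementary question: what is the smallest such evaluation count that any algorithm in a given class could achieve on the problem?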
In the context of discrete black-box optimization, several black-box
complexity models have been developed to analyze the best possible performance
that a black-box optimization algorithm can achieve on a given problem. The
models differ in the classes of algorithms to which these lower bounds apply.
This way, black-box complexity contributes to a better understanding of how
certain algorithmic choices (such as the amount of memory used by a heuristic,
its selective pressure, or properties of the strategies that it uses to create
new solution candidates) influence performance.
In this chapter we review the different black-box complexity models that have
been proposed in the literature, survey the bounds that have been obtained for
these models, and discuss how the interplay of running time analysis and
black-box complexity can inspire new algorithmic solutions to well-researched
problems in evolutionary computation. We also discuss in this chapter several
interesting open questions for future work.
Comment: This survey article is to appear (in a slightly modified form) in the book "Theory of Randomized Search Heuristics in Discrete Search Spaces", which will be published by Springer in 2018. The book is edited by Benjamin Doerr and Frank Neumann. Missing numbers of pointers to other chapters of this book will be added as soon as possible.
Evolutionary many-objective optimisation: pushing the boundaries
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Many-objective optimisation poses great challenges to evolutionary algorithms. To start with, the ineffectiveness of the Pareto dominance relation, the most important criterion in multi-objective optimisation, results in the underperformance of traditional Pareto-based algorithms. Also, the aggravated conflict between proximity and diversity, along with increasing time or space requirements as well as parameter sensitivity, has become a key barrier to the design of effective many-objective optimisation algorithms. Furthermore, the infeasibility of directly observing solutions can lead to serious difficulties in investigating and comparing algorithms' performance. In this thesis, we address these challenges, aiming to make evolutionary algorithms as effective in many-objective optimisation as in two- or three-objective optimisation. First, we significantly enhance Pareto-based algorithms to make them suitable for many-objective optimisation by placing individuals with poor proximity into crowded regions so that these individuals have a better chance of being eliminated. Second, we propose a grid-based evolutionary algorithm which explores the potential of the grid to deal with many-objective optimisation problems. Third, we present a bi-goal evolution framework that converts the many objectives of a given problem into two objectives regarding proximity and diversity, thus creating an optimisation problem in which the objectives are the goals of the search process itself. Fourth, we propose a comprehensive performance indicator to compare evolutionary algorithms on optimisation problems with various Pareto front shapes and any objective dimensionality.
Finally, we construct a test problem to aid the visual investigation of evolutionary search, with its Pareto-optimal solutions in a two-dimensional decision space having a distribution similar to that of their images in a higher-dimensional objective space. The work reported in this thesis is the outcome of innovative attempts at addressing some of the most challenging problems in evolutionary many-objective optimisation. This research has not only made some existing approaches, such as Pareto-based or grid-based algorithms that were traditionally regarded as unsuitable, effective for many-objective optimisation, but has also pushed other important boundaries with novel ideas including bi-goal evolution, a comprehensive performance indicator and a test problem for visual investigation. All the proposed algorithms have been systematically evaluated against the existing state of the art, and some of these algorithms have already been taken up by researchers and practitioners in the field.
Department of Computer Science, Brunel University London
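The ineffectiveness of Pareto dominance in many objectives can be seen with a short experiment: among uniformly random points, the fraction that are mutually non-dominated approaches one as the number of objectives grows, so dominance alone stops discriminating between solutions. A sketch with illustrative sample sizes:

```python
import random

def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fraction(n_points, n_obj, seed=0):
    """Fraction of random points in [0,1]^n_obj not dominated by any other."""
    rng = random.Random(seed)
    pts = [tuple(rng.random() for _ in range(n_obj)) for _ in range(n_points)]
    return sum(not any(dominates(q, p) for q in pts) for p in pts) / n_points

for m in (2, 5, 10):  # the fraction rises quickly with the number of objectives
    print(m, non_dominated_fraction(200, m))
```

This is precisely why Pareto-based selection alone underperforms in many-objective settings and why the thesis turns to secondary criteria such as proximity and diversity.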