
    Methods for many-objective optimization: an analysis

    Decomposition-based methods are often cited as the solution to problems related to many-objective optimization. Decomposition-based methods employ a scalarizing function to reduce a many-objective problem into a set of single-objective problems, which, once solved, yield a good approximation of the set of optimal solutions. This set is commonly referred to as the Pareto front. In this work we explore the implications of using decomposition-based methods over Pareto-based methods from a probabilistic point of view. Namely, we investigate whether there is an advantage to using a decomposition-based method, for example one based on the Chebyshev scalarizing function, over Pareto-based methods.
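
    As a point of reference for the Chebyshev scalarizing function mentioned above, here is a minimal sketch of how such a scalarization reduces an objective vector to a single value; the weight vector, ideal point, and toy objective values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def chebyshev(f, weights, ideal):
    """Chebyshev scalarization for minimization: max_i weights_i * |f_i - ideal_i|."""
    return float(np.max(np.asarray(weights) * np.abs(np.asarray(f) - np.asarray(ideal))))

# Toy example: score one candidate objective vector against an ideal point.
ideal = [0.0, 0.0, 0.0]
weights = [0.5, 0.3, 0.2]
print(chebyshev([1.0, 2.0, 3.0], weights, ideal))  # 0.6 = max(0.5, 0.6, 0.6)
```

    Minimizing this value for many different weight vectors is what turns a many-objective problem into the set of single-objective subproblems described in the abstract.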

    Multi agent collaborative search based on Tchebycheff decomposition

    This paper presents a novel formulation of Multi Agent Collaborative Search for multi-objective optimization, based on Tchebycheff decomposition. A population of agents combines heuristics that aim at exploring the search space both globally (social moves) and in a neighborhood of each agent (individualistic moves). In this novel formulation the selection process is based on a combination of Tchebycheff scalarization and Pareto dominance. Furthermore, while in the previous implementation social actions were applied to the whole population of agents and individualistic actions only to an elite sub-population, this mechanism is inverted in the new formulation. The novel agent-based algorithm is tested first on a standard benchmark of difficult problems and then on two specific problems in space trajectory design. Its performance is compared against a number of state-of-the-art multi-objective optimization algorithms. The results demonstrate that the novel agent-based search outperforms its predecessor in a number of cases and converges better than the other state-of-the-art algorithms, with a better spread of solutions.
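
    The selection process described above combines Tchebycheff scalarization with Pareto dominance; the sketch below shows one plausible way such a rule could be wired together, assuming minimization. The tie-breaking order (dominance first, scalarization as a fallback) is an assumption for illustration, not the paper's exact mechanism.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def tchebycheff(f, weights, ideal):
    """Weighted Tchebycheff scalarization against an ideal point (minimization)."""
    return float(np.max(np.asarray(weights) * (np.asarray(f) - np.asarray(ideal))))

def select(candidate, incumbent, weights, ideal):
    """Keep the candidate if it dominates the incumbent, discard it if it is
    dominated, and otherwise fall back to the Tchebycheff score (assumed rule)."""
    if dominates(candidate, incumbent):
        return candidate
    if dominates(incumbent, candidate):
        return incumbent
    return min(candidate, incumbent, key=lambda f: tchebycheff(f, weights, ideal))
```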

    A Preference-guided Multiobjective Evolutionary Algorithm based on Decomposition

    Multiobjective evolutionary algorithms based on decomposition (MOEA/Ds) represent a class of widely employed solvers for multicriteria optimization problems. In this work we investigate the adaptation of these methods to incorporate preference information prior to the optimization, so that the search process can be biased towards a Pareto-optimal region that better satisfies the aspirations of a decision-making entity. Incorporating the Preference-based Adaptive Region-of-interest (PAR) framework into the MOEA/D requires only a modification of the reference points used within the scalarization function, which in principle allows straightforward use in more sophisticated versions of the base algorithm. Experimental results on the UF benchmark set suggest gains in diversity within the region of interest, without significant losses in convergence.
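
    Since the PAR framework reportedly only modifies the reference points used within the scalarization function, a small sketch of what such a reference-point shift could look like is given below; the function name, the shrink factor, and the aspiration point are hypothetical and chosen purely for illustration.

```python
import numpy as np

def shifted_reference_point(ideal, aspiration, shrink=0.3):
    """Move the scalarization reference point from the ideal point towards a
    decision-maker aspiration point (illustrative shrink factor)."""
    ideal, aspiration = np.asarray(ideal, float), np.asarray(aspiration, float)
    return ideal + shrink * (aspiration - ideal)

# Each MOEA/D-style subproblem would then scalarize against this shifted point.
z = shifted_reference_point(ideal=[0.0, 0.0], aspiration=[0.4, 0.1])
print(z)  # [0.12 0.03]
```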

    Multi-Objective Reinforcement Learning Based on Decomposition: A Taxonomy and Framework

    Multi-objective reinforcement learning (MORL) extends traditional RL by seeking policies that make different compromises among conflicting objectives. The recent surge of interest in MORL has led to diverse studies and solution methods, often drawing on existing knowledge in multi-objective optimization based on decomposition (MOO/D). Yet a clear categorization based on both RL and MOO/D is lacking in the existing literature. Consequently, MORL researchers face difficulties when trying to classify contributions within a broader context due to the absence of a standardized taxonomy. To tackle this issue, this paper introduces multi-objective reinforcement learning based on decomposition (MORL/D), a novel methodology bridging the literature of RL and MOO. A comprehensive taxonomy for MORL/D is presented, providing a structured foundation for categorizing existing and potential MORL works. The taxonomy is then used to scrutinize MORL research, enhancing clarity and conciseness through well-defined categorization. Moreover, a flexible framework derived from the taxonomy is introduced. This framework accommodates diverse instantiations using tools from both RL and MOO/D. Its versatility is demonstrated by implementing it in different configurations and assessing it on contrasting benchmark problems. Results indicate that MORL/D instantiations achieve performance comparable to current state-of-the-art approaches on the studied problems. By presenting the taxonomy and framework, this paper offers a comprehensive perspective and a unified vocabulary for MORL. This not only facilitates the identification of algorithmic contributions but also lays the groundwork for novel research avenues in MORL. Comment: Accepted at JAI
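
    One building block that a decomposition-based MORL instantiation could use is the scalarization of a vector return into a single utility per policy; the sketch below shows two common choices (linear and Chebyshev-style), which are generic examples rather than the framework's prescribed components.

```python
import numpy as np

def scalarize_return(vector_return, weights, utopia, method="chebyshev"):
    """Turn a vector of per-objective returns into one scalar utility
    (maximization; 'linear' and 'chebyshev' are illustrative options)."""
    v, w, u = map(np.asarray, (vector_return, weights, utopia))
    if method == "linear":
        return float(w @ v)
    return float(-np.max(w * np.abs(u - v)))  # negated weighted distance to a utopia point

print(scalarize_return([10.0, 4.0], weights=[0.5, 0.5], utopia=[12.0, 12.0]))  # -4.0
```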

    Evolutionary Many-objective Optimization of Hybrid Electric Vehicle Control: From General Optimization to Preference Articulation

    Many real-world optimization problems have more than three objectives, which has triggered increasing research interest in developing efficient and effective evolutionary algorithms for solving many-objective optimization problems. However, most many-objective evolutionary algorithms have only been evaluated on benchmark test functions, and few have been applied to real-world optimization problems. To move a step forward, this paper presents a case study of solving a many-objective hybrid electric vehicle controller design problem using three state-of-the-art algorithms, namely a decomposition-based evolutionary algorithm (MOEA/D), a non-dominated-sorting-based genetic algorithm (NSGA-III), and a reference-vector-guided evolutionary algorithm (RVEA). We start with a typical setting that aims to approximate the Pareto front without introducing any user preferences. Based on analyses of the approximated Pareto front, we introduce a preference articulation method and embed it in the three evolutionary algorithms to identify solutions that the decision-maker prefers. Our experimental results demonstrate that by incorporating user preferences into many-objective evolutionary algorithms, we are not only able to gain deep insight into the trade-off relationships between the objectives, but also to obtain high-quality solutions that reflect the decision-maker's preferences. In addition, our experimental results indicate that each of the three algorithms examined in this work has unique advantages that can be exploited when applied to the optimization of real-world problems.
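
    The preference articulation step is described only at a high level above; as a generic illustration (not the paper's specific method), reference-vector-based algorithms such as RVEA are often biased towards a region of interest by keeping only the reference vectors close to a preferred direction, for example:

```python
import numpy as np

def filter_reference_vectors(vectors, preferred, max_angle_deg=20.0):
    """Keep the reference vectors lying within a given angle of the preferred direction."""
    vectors = np.asarray(vectors, dtype=float)
    vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    p = np.asarray(preferred, dtype=float)
    p = p / np.linalg.norm(p)
    cos_limit = np.cos(np.deg2rad(max_angle_deg))
    return vectors[vectors @ p >= cos_limit]

vs = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])
print(filter_reference_vectors(vs, preferred=[1.0, 1.0]))  # keeps only the [0.7, 0.7] direction
```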

    Parallel Multi-Objective Hyperparameter Optimization with Uniform Normalization and Bounded Objectives

    Machine learning (ML) methods offer a wide range of configurable hyperparameters that have a significant influence on their performance. While accuracy is a commonly used performance objective, in many settings it is not sufficient. Optimizing ML models with respect to multiple objectives such as accuracy, confidence, fairness, calibration, privacy, latency, and memory consumption is becoming crucial. To that end, hyperparameter optimization, the systematic tuning of these hyperparameters, is already challenging for a single objective and even more so for multiple objectives. In addition, differences in objective scales, failures, and the presence of outlier values in the objectives make the problem even harder. We propose a multi-objective Bayesian optimization (MoBO) algorithm that addresses these problems through uniform objective normalization and randomized weights in scalarization. We increase the efficiency of our approach by imposing constraints on the objectives to avoid exploring unnecessary configurations (e.g., insufficient accuracy). Finally, we leverage an approach to parallelize MoBO, which results in a 5x speed-up when using 16x more workers. Comment: Preprint with appendices
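
    A rough sketch of the two ingredients named in the abstract, uniform objective normalization and randomized-weight scalarization, is given below; the rank-based normalizer is only an assumed interpretation of "uniform normalization", and all names and values are illustrative.

```python
import numpy as np

def uniform_normalize(history, value):
    """Map a raw objective value to [0, 1] by its empirical rank among the
    values observed so far (assumed interpretation of uniform normalization)."""
    history = np.asarray(history, dtype=float)
    return float(np.mean(history <= value))

def random_weight_scalarize(norm_objectives, rng):
    """Scalarize normalized objectives with a random weight vector (minimization)."""
    w = rng.dirichlet(np.ones(len(norm_objectives)))
    return float(w @ np.asarray(norm_objectives))

rng = np.random.default_rng(0)
loss_hist, latency_hist = [0.3, 0.25, 0.4, 0.2], [120.0, 90.0, 300.0, 150.0]
point = [uniform_normalize(loss_hist, 0.25), uniform_normalize(latency_hist, 100.0)]
print(random_weight_scalarize(point, rng))
```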

    Revisiting Norm Optimization for Multi-Objective Black-Box Problems: A Finite-Time Analysis

    The complexity of Pareto fronts imposes a great challenge on the convergence analysis of multi-objective optimization methods. While most theoretical convergence studies have addressed finite-set and/or discrete problems, others have provided probabilistic guarantees, assumed a total order on the solutions, or studied their asymptotic behaviour. In this paper, we revisit the Tchebycheff weighted method in a hierarchical bandits setting and provide a finite-time bound on the Pareto-compliant additive ε-indicator. To the best of our knowledge, this paper is one of the few that establish a link between weighted sum methods and quality indicators in finite time. Comment: submitted to Journal of Global Optimization. This article's notation and terminology are based on arXiv:1612.0841
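
    For reference, textbook definitions of the two objects the abstract links, the weighted Tchebycheff scalarization and the Pareto-compliant additive ε-indicator, are reproduced below; the paper's exact normalization and constants may differ.

```latex
% Weighted Tchebycheff scalarization of objectives f_1, ..., f_m with weights w
% and reference (ideal) point z^*, for minimization:
g^{\mathrm{tch}}(x \mid w, z^*) = \max_{1 \le i \le m} w_i \left( f_i(x) - z_i^* \right)

% Additive epsilon-indicator of an approximation set A with respect to a
% reference set R (smaller is better):
I_{\epsilon+}(A, R) = \max_{r \in R} \, \min_{a \in A} \, \max_{1 \le i \le m} \left( f_i(a) - f_i(r) \right)
```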

    Scalarized Preferences in Multi-objective Optimization

    Multi-criteria optimization problems have no solution that is optimal in every objective function. The difficulty of such problems lies in finding a compromise solution that satisfies the preferences of the decision-maker who implements the compromise. Scalarization, the mapping of the vector of objective values to a real number, solves these problems by identifying a single solution as the global preference optimum. However, scalarization methods generate no additional information about other compromise solutions that could change the decision-maker's preferences regarding the global optimum. To address this problem, this dissertation provides a theoretical and algorithmic analysis of scalarized preferences. The theoretical analysis consists of the development of an ordering framework that characterizes preferences as problem transformations defining preferred subsets of the Pareto front. Scalarization is represented within this framework as a transformation of the objective set. Furthermore, axioms are proposed that capture desirable properties of scalarization functions, and it is shown under which conditions existing scalarization functions satisfy these axioms. The algorithmic analysis characterizes preferences by the result that an optimization algorithm generates. Two new paradigms are identified within this analysis, and for both paradigms algorithms are designed that use scalarized preference information: preference-biased Pareto front approximations distribute points across the entire Pareto front but concentrate more points in regions with better scalarization values; multimodal preference optima are points that represent local scalarization optima in the objective space. A three-stage algorithm is developed that approximates local scalarization optima, and different methods are evaluated for the individual stages. Two real-world problems are presented that illustrate the usefulness of the two algorithms. The first problem consists of finding operating schedules for a combined heat and power plant that maximize the generated electricity and heat while minimizing fuel consumption. Preference-biased approximations generate more energy-efficient solutions, among which the decision-maker can choose a favored solution by weighing the trade-offs between the three objectives. The second problem concerns scheduling appliances in a residential building such that energy costs, carbon dioxide emissions, and thermal discomfort are minimized. It is shown that local scalarization optima represent schedules that offer a good balance between the three objectives. The analysis and experiments presented in this work enable decision-makers to make better decisions by applying methods that generate more options consistent with their preferences.
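
    As a minimal sketch of the scalarization idea described above (the mapping of the objective vector to a single real number), an augmented Chebyshev form is shown below; the specific form and its parameters are chosen for illustration and are not taken from the dissertation.

```python
import numpy as np

def augmented_chebyshev(f, weights, reference, rho=1e-3):
    """Map an objective vector to one real number (minimization). The small
    augmentation term rho * sum(...) breaks ties among weakly optimal points."""
    d = np.asarray(weights) * (np.asarray(f) - np.asarray(reference))
    return float(np.max(d) + rho * np.sum(d))

# Two trade-offs for a toy three-objective problem, scored against a reference point:
w, z = [0.4, 0.4, 0.2], [0.0, 0.0, 0.0]
print(augmented_chebyshev([1.0, 2.0, 5.0], w, z))  # ~1.00 plus a small augmentation term
print(augmented_chebyshev([2.0, 1.0, 3.0], w, z))  # ~0.80 plus a small augmentation term
```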