29 research outputs found
Tackling neural architecture search with quality diversity optimization
Neural architecture search (NAS) has been studied extensively and has grown to become a research field with substantial impact. While classical single-objective NAS searches for the architecture with the best performance, multi-objective NAS considers multiple objectives that should be optimized simultaneously, e.g., minimizing resource usage alongside the validation error. Although considerable progress has been made in the field of multi-objective NAS, we argue that there is some discrepancy between the actual optimization problem of practical interest and the optimization problem that multi-objective NAS tries to solve. We resolve this discrepancy by formulating the multi-objective NAS problem as a quality diversity optimization (QDO) problem and introduce three quality diversity NAS optimizers (two of them belonging to the group of multifidelity optimizers), which search for high-performing yet diverse architectures that are optimal for application-specific niches, e.g., hardware constraints. By comparing these optimizers to their multi-objective counterparts, we demonstrate that quality diversity NAS in general outperforms multi-objective NAS with respect to quality of solutions and efficiency. We further show how applications and future NAS research can thrive on QDO.
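The QDO formulation described above can be illustrated with a minimal sketch: architectures are binned into application-specific niches (here, ranges of a resource measure such as parameter count), and an archive keeps the best-performing architecture found in each niche. All names, the niche boundaries, and the random-search loop are illustrative assumptions, not the paper's actual optimizers.

```python
import random

# Hypothetical illustration of the QDO view of NAS: each niche is a range of
# a resource measure (here, parameter count), and the archive keeps the
# best-performing architecture found so far within each niche.
NICHES = [(0, 1e6), (1e6, 5e6), (5e6, 2e7)]  # parameter-count ranges

def niche_of(n_params):
    for i, (lo, hi) in enumerate(NICHES):
        if lo <= n_params < hi:
            return i
    return None

def qd_nas(sample_architecture, evaluate, budget):
    """sample_architecture() -> arch; evaluate(arch) -> (val_error, n_params)."""
    archive = {}  # niche index -> (val_error, arch)
    for _ in range(budget):
        arch = sample_architecture()
        err, n_params = evaluate(arch)
        k = niche_of(n_params)
        if k is not None and (k not in archive or err < archive[k][0]):
            archive[k] = (err, arch)
    return archive
```

Unlike multi-objective NAS, which returns a Pareto front, this archive directly answers the practical question "what is the best architecture that fits my hardware budget?" for each niche.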
Bayesian Quality-Diversity approaches for constrained optimization problems with mixed continuous, discrete and categorical variables
Complex engineering design problems, such as those arising in aerospace, civil, or energy engineering, require numerically costly simulation codes to predict the behavior and performance of the system to be designed. To design such systems, these codes are often embedded in an optimization process that provides the best design while satisfying the design constraints. Recently, new approaches, called Quality-Diversity, have been proposed to enhance the exploration of the design space and to provide a set of optimal, diversified solutions with respect to some feature functions. These functions are useful for assessing trade-offs. Furthermore, complex engineering design problems often involve mixed continuous, discrete, and categorical design variables, which make it possible to account for technological choices in the optimization problem. In this paper, a new Quality-Diversity methodology based on a mixed continuous, discrete, and categorical Bayesian optimization strategy is proposed. This approach reduces the computational cost compared to classical Quality-Diversity approaches while handling discrete choices and constraints. The performance of the proposed method is assessed on a benchmark of analytical problems as well as on an industrial design optimization problem dealing with aerospace systems.
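One ingredient such mixed-variable approaches need is a way to compare designs with both continuous and categorical variables, so that a surrogate model can stand in for the costly simulation. The sketch below is a hedged illustration, not the paper's method: categorical variables are one-hot encoded, and a trivial nearest-neighbour surrogate predicts from the closest evaluated design; all names are made up for the example.

```python
import math

# Hypothetical sketch of handling mixed design variables for a
# surrogate-assisted loop: continuous variables are compared directly,
# categorical ones via one-hot encoding, giving a single distance that a
# simple surrogate (here, 1-nearest-neighbour prediction) can use.
def encode(x_cont, x_cat, categories):
    """x_cont: list of floats; x_cat: list of category labels."""
    onehot = []
    for value, levels in zip(x_cat, categories):
        onehot.extend(1.0 if value == lvl else 0.0 for lvl in levels)
    return list(x_cont) + onehot

def distance(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def surrogate_predict(candidate, evaluated):
    """Predict the objective of `candidate` from the closest evaluated design.

    evaluated: list of (encoded_design, objective_value) pairs.
    """
    return min(evaluated, key=lambda d: distance(candidate, d[0]))[1]
```

A Bayesian approach would replace the nearest-neighbour predictor with a Gaussian-process surrogate over such an encoding, which is where the computational savings over direct Quality-Diversity search come from.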
Democratizing machine learning
Machine learning artifacts are increasingly embedded in society, often in the form of automated decision-making processes. One major reason for this, along with methodological improvements, is the increasing accessibility of data, but also of machine learning toolkits that make machine learning methodology available to non-experts.
The core focus of this thesis is exactly this – democratizing access to machine learning in order to enable a wider audience to benefit from its potential.
Contributions in this manuscript stem from several different areas within this broader field. A major section is dedicated to the field of automated machine learning (AutoML), with the goal of abstracting away the tedious task of obtaining an optimal predictive model for a given dataset. This process mostly consists of finding said optimal model, often through hyperparameter optimization, while the user only selects the appropriate performance metric(s) and validates the resulting models. This process can often be improved or sped up by learning from previous experiments.
Three such methods are presented in this thesis: one aims to obtain a fixed set of hyperparameter configurations that is likely to contain good solutions for any new dataset, and two use dataset characteristics to propose new configurations.
It furthermore presents a collection of the required experiment metadata and shows how such metadata can be used for the development of, and as a test bed for, new hyperparameter optimization methods. The pervasiveness of ML-derived models in many aspects of society simultaneously calls for increased scrutiny of how such models shape society and of the potential biases they exhibit. This thesis therefore presents an AutoML tool that allows fairness considerations to be incorporated into the search for an optimal model. This requirement for fairness simultaneously poses the question of whether a model's fairness can be estimated reliably, which is studied in a further contribution in this thesis. Since access to machine learning methods also depends heavily on access to software and toolboxes, several contributions in the form of software are part of this thesis. The mlr3pipelines R package allows for embedding models in so-called machine learning pipelines that include pre- and postprocessing steps often required in machine learning and AutoML. The mlr3fairness R package, on the other hand, enables users to audit models for potential biases and to reduce those biases through different debiasing techniques. One such technique, multi-calibration, is published as a separate software package, mcboost.
Quality-diversity optimization: a novel branch of stochastic optimization
Traditional optimization algorithms search for a single global optimum that maximizes (or minimizes) the objective function. Multimodal optimization algorithms search for the highest peaks in the search space, of which there can be more than one. Quality-Diversity algorithms are a recent addition to the evolutionary computation toolbox that do not search only for a single set of local optima, but instead try to illuminate the search space. In effect, they provide a holistic view of how high-performing solutions are distributed throughout a search space. The main differences from multimodal optimization algorithms are that (1) Quality-Diversity typically works in the behavioral space (or feature space), and not in the genotypic (or parameter) space, and (2) Quality-Diversity attempts to fill the whole behavior space, even if the niche is not a peak in the fitness landscape. In this chapter, we provide a gentle introduction to Quality-Diversity optimization, discuss the main representative algorithms, and the main current topics under consideration in the community. Throughout the chapter, we also discuss several successful applications of Quality-Diversity algorithms, including deep learning, robotics, and reinforcement learning.
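The "illuminate the search space" idea is most easily seen in a MAP-Elites-style loop: a grid of niches is laid over the feature (behavior) space, and each cell keeps its best solution. The sketch below is a minimal, illustrative version on a toy two-dimensional problem; the grid size, mutation scheme, and problem are assumptions, not any specific published algorithm.

```python
import random

# Minimal MAP-Elites-style sketch: maximize fitness while filling a grid of
# niches defined over a feature (behaviour) space; each cell keeps its best
# solution (its "elite"). Features are assumed to lie roughly in [-1, 1].
GRID = 10  # cells per feature dimension

def map_elites(fitness, features, budget, sigma=0.1):
    archive = {}  # grid cell -> (fitness, solution)
    for _ in range(budget):
        if archive and random.random() < 0.9:
            # mutate a randomly chosen elite
            _, parent = random.choice(list(archive.values()))
            x = [v + random.gauss(0.0, sigma) for v in parent]
        else:
            # occasionally sample a fresh random solution
            x = [random.uniform(-1.0, 1.0) for _ in range(2)]
        f = fitness(x)
        cell = tuple(min(GRID - 1, max(0, int((v + 1.0) / 2.0 * GRID)))
                     for v in features(x))
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)
    return archive
```

Note how the archive accepts a solution whenever its cell is empty or improved, so cells far from the global optimum are still filled, which is exactly the difference from multimodal optimization described above.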
Discovering Many Diverse Solutions with Bayesian Optimization
Bayesian optimization (BO) is a popular approach for sample-efficient
optimization of black-box objective functions. While BO has been successfully
applied to a wide range of scientific applications, traditional approaches to
single-objective BO only seek to find a single best solution. This can be a
significant limitation in situations where solutions may later turn out to be
intractable. For example, a designed molecule may turn out to violate
constraints that can only be reasonably evaluated after the optimization
process has concluded. To address this issue, we propose Rank-Ordered Bayesian
Optimization with Trust-regions (ROBOT) which aims to find a portfolio of
high-performing solutions that are diverse according to a user-specified
diversity metric. We evaluate ROBOT on several real-world applications and show
that it can discover large sets of high-performing diverse solutions while
requiring few additional function evaluations compared to finding a single best
solution.
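The portfolio idea behind such diversity-aware optimization can be sketched independently of the BO machinery: from a pool of scored candidates, greedily keep high performers that are at least some threshold apart under a user-specified diversity metric. This is an illustrative simplification with made-up names, not ROBOT's actual acquisition procedure.

```python
# Illustrative sketch of a diverse portfolio: greedily select high-scoring
# candidates subject to a minimum pairwise distance `tau` under a
# user-specified diversity metric, returning at most `k` solutions.
def diverse_portfolio(candidates, scores, metric, tau, k):
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    chosen = []  # indices of selected candidates
    for i in order:
        if all(metric(candidates[i], candidates[j]) >= tau for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return [candidates[i] for i in chosen]
```

In the molecule-design example from the abstract, the metric might be a structural dissimilarity, so that if the single best molecule later violates a constraint, the remaining portfolio members are genuinely different fallbacks rather than near-duplicates.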
Proximal Policy Gradient Arborescence for Quality Diversity Reinforcement Learning
Training generally capable agents that perform well in unseen dynamic
environments is a long-term goal of robot learning. Quality Diversity
Reinforcement Learning (QD-RL) is an emerging class of reinforcement learning
(RL) algorithms that blend insights from Quality Diversity (QD) and RL to
produce a collection of high-performing and behaviorally diverse policies with
respect to a behavioral embedding. Existing QD-RL approaches have thus far
taken advantage of sample-efficient off-policy RL algorithms. However, recent
advances in high-throughput, massively parallelized robotic simulators have
opened the door for algorithms that can take advantage of such parallelism, and
it is unclear how to scale existing off-policy QD-RL methods to these new
data-rich regimes. In this work, we take the first steps to combine on-policy
RL methods, specifically Proximal Policy Optimization (PPO), that can leverage
massive parallelism, with QD, and propose a new QD-RL method with these
high-throughput simulators and on-policy training in mind. Our proposed
Proximal Policy Gradient Arborescence (PPGA) algorithm yields a 4x improvement
over baselines on the challenging humanoid domain.
Quality-diversity in dissimilarity spaces
The theory of magnitude provides a mathematical framework for quantifying and
maximizing diversity. We apply this framework to formulate quality-diversity
algorithms in generic dissimilarity spaces. In particular, we instantiate and
demonstrate a very general version of Go-Explore with promising performance.