Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning
Hyperparameter optimization (HPO) is important to leverage the full potential
of machine learning (ML). In practice, users are often interested in
multi-objective (MO) problems, i.e., optimizing potentially conflicting
objectives, like accuracy and energy consumption. To tackle this, the vast
majority of MO-ML algorithms return a Pareto front of non-dominated machine
learning models to the user. Optimizing the hyperparameters of such algorithms
is non-trivial as evaluating a hyperparameter configuration entails evaluating
the quality of the resulting Pareto front. In the literature, there are known
indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by
quantifying different properties (e.g., volume, proximity to a reference
point). However, choosing the indicator that leads to the desired Pareto front
might be a hard task for a user. In this paper, we propose a human-centered
interactive HPO approach tailored to multi-objective ML that leverages
preference learning to extract desiderata from users and guide the
optimization. Instead of relying on the user to guess the most suitable
indicator for their needs, our approach automatically learns an appropriate
indicator. Concretely, we leverage pairwise comparisons of distinct Pareto
fronts to learn such an appropriate quality indicator. Then, we optimize the
hyperparameters of the underlying MO-ML algorithm towards this learned
indicator using a state-of-the-art HPO approach. In an experimental study
targeting the environmental impact of ML, we demonstrate that our approach
leads to substantially better Pareto fronts compared to optimizing based on a
wrong indicator pre-selected by the user, and performs comparably in the case
of an advanced user who knows which indicator to pick.
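To make the notion of a Pareto-front quality indicator concrete, here is a minimal sketch (not from the paper) of the hypervolume indicator for a two-objective minimization problem; the function name and the reference-point convention are illustrative assumptions:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D Pareto front under minimization: the area of the
    region dominated by the front and bounded by reference point `ref`.
    Illustrative sketch only; names are not from the paper."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:            # sweep left-to-right in objective 1
        if f2 < prev_f2:          # skip points dominated within the front
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

A front that adds a dominated point gains no hypervolume, which is why the indicator rewards both convergence and spread.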
Multi-Objective GFlowNets
We study the problem of generating diverse candidates in the context of
Multi-Objective Optimization. In many applications of machine learning such as
drug discovery and material design, the goal is to generate candidates which
simultaneously optimize a set of potentially conflicting objectives. Moreover,
these objectives are often imperfect evaluations of some underlying property of
interest, making it important to generate diverse candidates to have multiple
options for expensive downstream evaluations. We propose Multi-Objective
GFlowNets (MOGFNs), a novel method for generating diverse Pareto optimal
solutions, based on GFlowNets. We introduce two variants of MOGFNs: MOGFN-PC,
which models a family of independent sub-problems defined by a scalarization
function, with reward-conditional GFlowNets, and MOGFN-AL, which solves a
sequence of sub-problems defined by an acquisition function in an active
learning loop. Our experiments on a wide variety of synthetic and benchmark
tasks demonstrate the advantages of the proposed methods in terms of Pareto
performance and, importantly, improved candidate diversity, which is the main
contribution of this work.
Comment: 23 pages, 8 figures. ICML 2023. Code at:
https://github.com/GFNOrg/multi-objective-gf
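As a rough illustration of the scalarization idea behind MOGFN-PC (the weighted-sum form and all names here are illustrative assumptions, not the paper's exact conditional reward):

```python
import random

def weighted_sum(objectives, weights):
    """Scalarize a vector of objective values (to be maximized) with a
    preference weight vector -- the kind of scalarization a
    preference-conditional model can be trained across."""
    return sum(w * o for w, o in zip(weights, objectives))

def sample_preference(k):
    """Sample a random preference vector uniformly from the simplex
    (equivalent to Dirichlet(1, ..., 1)), so training covers many
    trade-offs between the k objectives."""
    draws = [random.expovariate(1.0) for _ in range(k)]
    total = sum(draws)
    return [d / total for d in draws]
```

Conditioning a single generator on such sampled weight vectors is what lets one model cover a family of scalarized sub-problems.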
Feature learning in feature-sample networks using multi-objective optimization
Data and knowledge representation are fundamental concepts in machine
learning. The quality of the representation impacts the performance of the
learning model directly. Feature learning transforms or enhances raw data to
structures that are effectively exploited by those models. In recent years,
several works have been using complex networks for data representation and
analysis. However, no feature learning method has been proposed for this
category of techniques. Here, we present an unsupervised feature learning
mechanism that works on datasets with binary features. First, the dataset is
mapped into a feature-sample network. Then, a multi-objective optimization
process selects a set of new vertices to produce an enhanced version of the
network. The new features depend on a nonlinear function of a combination of
preexisting features. Effectively, the process projects the input data into a
higher-dimensional space. To solve the optimization problem, we design two
metaheuristics based on the lexicographic genetic algorithm and the improved
strength Pareto evolutionary algorithm (SPEA2). We show that the enhanced
network contains more information and can be exploited to improve the
performance of machine learning methods. The advantages and disadvantages of
each optimization strategy are discussed.
Comment: 7 pages, 4 figures
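The Pareto dominance relation that SPEA2-style selection builds on can be sketched as follows (a generic minimization-convention helper, not the paper's code):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset (the current Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

SPEA2 assigns fitness from such dominance counts, while a lexicographic genetic algorithm instead ranks objectives in a fixed priority order.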
Solving Dynamic Multi-objective Optimization Problems Using Incremental Support Vector Machine
The main feature of Dynamic Multi-objective Optimization Problems (DMOPs)
is that the objective functions change over time or with the environment. One
promising approach for solving DMOPs is to reuse
the obtained Pareto optimal set (POS) to train prediction models via machine
learning approaches. In this paper, we train an Incremental Support Vector
Machine (ISVM) classifier with the past POS, and then the solutions of the DMOP
we want to solve at the next moment are filtered through the trained ISVM
classifier. A high-quality initial population will be generated by the ISVM
classifier, and a variety of different types of population-based dynamic
multi-objective optimization algorithms can benefit from the population. To
verify this idea, we incorporate the proposed approach into three evolutionary
algorithms: multi-objective particle swarm optimization (MOPSO), Nondominated
Sorting Genetic Algorithm II (NSGA-II), and the Regularity Model-based
multi-objective estimation of distribution algorithm (RE-MEDA). We test these
algorithms experimentally, and the results demonstrate the effectiveness of
the proposed approach.
Comment: 6 pages
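The filtering idea can be sketched with a toy online classifier standing in for the incremental SVM (the perceptron below is an illustrative substitute, not the paper's ISVM, and all names are assumptions):

```python
class OnlinePerceptron:
    """Toy incremental classifier standing in for an ISVM: it can be updated
    with new labeled examples (past POS members vs. dominated solutions) as
    the environment changes, then used to screen candidate solutions."""
    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr = lr

    def partial_fit(self, xs, ys):
        # ys in {-1, +1}; standard perceptron update on misclassified points
        for x, y in zip(xs, ys):
            if y * (sum(w * xi for w, xi in zip(self.w, x)) + self.b) <= 0:
                self.w = [w + self.lr * y * xi for w, xi in zip(self.w, x)]
                self.b += self.lr * y

    def predict(self, x):
        return 1 if sum(w * xi for w, xi in zip(self.w, x)) + self.b > 0 else -1

def filter_population(candidates, clf):
    """Keep only candidates the classifier predicts to lie near the POS,
    yielding a higher-quality initial population for the next environment."""
    return [x for x in candidates if clf.predict(x) == 1]
```

Any population-based dynamic optimizer could then be seeded with the filtered candidates instead of a random initial population.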
Multi-Objective Bayesian Optimization with Active Preference Learning
Many real-world black-box optimization problems require optimizing multiple
criteria simultaneously. However, in a multi-objective optimization (MOO)
problem, identifying the whole Pareto front requires a prohibitive search
cost, while in many practical scenarios the decision maker (DM) only needs a
specific solution among the set of Pareto optimal solutions. We propose a
Bayesian optimization (BO) approach for identifying the most preferred
solution in MOO with expensive objective functions, in which a Bayesian
preference model of the DM is adaptively estimated in an interactive manner
based on two types of supervision: pairwise preferences and improvement
requests. To explore the most preferred solution, we define an
acquisition function in which the uncertainty both in the objective functions
and the DM preference is incorporated. Further, to minimize the interaction
cost with the DM, we also propose an active learning strategy for the
preference estimation. We empirically demonstrate the effectiveness of our
proposed method through benchmark function optimization and hyper-parameter
optimization problems for machine learning models.
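A minimal stand-in for learning a utility function from pairwise preferences is a Bradley-Terry style logistic fit (a point-estimate sketch, not the paper's Bayesian model; the function name and linear-utility assumption are illustrative):

```python
import math

def fit_preference_weights(pairs, dim, lr=0.5, epochs=200):
    """Fit a linear utility u(x) = w . x from pairwise comparisons, where each
    pair is (winner, loser) in objective space, using the Bradley-Terry
    logistic likelihood P(winner > loser) = sigmoid(w . (winner - loser))."""
    w = [0.0] * dim
    for _ in range(epochs):
        for winner, loser in pairs:
            diff = [a - b for a, b in zip(winner, loser)]
            score = sum(wi * di for wi, di in zip(w, diff))
            p = 1.0 / (1.0 + math.exp(-score))
            # gradient ascent on the log-likelihood of the observed preference
            w = [wi + lr * (1.0 - p) * di for wi, di in zip(w, diff)]
    return w
```

A Bayesian treatment would place a prior on w and keep its posterior uncertainty, which is what the acquisition function in the abstract exploits.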