Component-level study of a decomposition-based multi-objective optimizer on a limited evaluation budget
Decomposition-based algorithms have emerged as one of the most
popular classes of solvers for multi-objective optimization. Despite
their popularity, there is a lack of guidance on how to configure
such algorithms for real-world problems, based on the features or
contexts of those problems. One context that is important for many
real-world problems is that function evaluations are expensive, and
so algorithms need to be able to provide adequate convergence on
a limited budget (e.g. 500 evaluations). This study contributes to
emerging guidance on algorithm configuration by investigating
how the convergence of the popular decomposition-based optimizer
MOEA/D, over a limited budget, is affected by the choice of component-level
configuration. Two main aspects are considered: (1) the impact
of sharing information; (2) the impact of the normalisation scheme. The
empirical test framework includes detailed trajectory analysis, as
well as more conventional performance indicator analysis, to help
identify and explain the behaviour of the optimizer. Use of neighbours
in generating new solutions is found to be highly disruptive
for searching on a small budget, leading to better convergence in
some areas but far worse convergence in others. The findings also
emphasise the challenge and importance of using an appropriate
normalisation scheme.
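The decomposition idea behind MOEA/D can be illustrated with a minimal sketch: each weight vector defines one scalar sub-problem via Tchebycheff scalarization. The specific scalarization, weight vectors, ideal point, and objective values below are illustrative assumptions, not the configuration studied in the paper.

```python
import numpy as np

def tchebycheff(f, weight, z_star):
    """Tchebycheff scalarization: a weight vector turns the
    multi-objective vector f into a single scalar sub-problem,
    measured relative to the ideal point z_star."""
    return float(np.max(weight * np.abs(f - z_star)))

# Two objectives, three evenly spread weight vectors (three sub-problems).
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
z_star = np.array([0.0, 0.0])   # ideal point (assumed known here)
f = np.array([0.4, 0.2])        # objective vector of one candidate solution

scores = [tchebycheff(f, w, z_star) for w in weights]
best = int(np.argmin(scores))   # sub-problem this candidate serves best
```

Sharing information between neighbouring sub-problems (adjacent weight vectors) is exactly the mechanism whose disruptiveness on small budgets the study examines.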
Bandit-based cooperative coevolution for tackling contribution imbalance in large-scale optimization problems
This paper addresses the issue of computational resource allocation within the context of cooperative coevolution. Cooperative coevolution typically works by breaking a problem down into smaller subproblems (or components) and coevolving them in a round-robin fashion, resulting in a uniform resource allocation among its components. Despite its success on a wide range of problems, cooperative coevolution struggles to perform efficiently when its components do not contribute equally to the overall objective value. This is of crucial importance on large-scale optimization problems, where such differences are further magnified. To resolve this imbalance problem, we extend standard cooperative coevolution to a new generic framework capable of learning the contribution of each component using multi-armed bandit techniques. The new framework allocates computational resources to each component in proportion to its contribution towards improving the overall objective value. This approach results in a more economical use of the limited computational resources. We study different aspects of the proposed framework in the light of extensive experiments. Our empirical results confirm that even a simple bandit-based credit assignment scheme can significantly improve the performance of cooperative coevolution on large-scale continuous problems, leading to performance competitive with state-of-the-art algorithms.
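A minimal sketch of the idea, assuming a UCB1-style bandit (the paper's actual credit-assignment scheme may differ): components are arms, observed improvements in the objective are rewards, and each round the evaluation budget goes to the arm with the best upper confidence bound. The per-component contribution means below are hypothetical.

```python
import math
import random

def ucb_pick(rewards, counts, t, c=2.0):
    """UCB1-style choice: unplayed components first, then the highest
    mean reward plus an exploration bonus."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
                             + c * math.sqrt(math.log(t) / counts[i]))

# Toy run: three components with different (assumed) mean contributions;
# the bandit steers most of the budget to the high-contribution component.
random.seed(0)
true_contrib = [0.1, 0.9, 0.3]   # hypothetical per-component improvement means
rewards = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
for t in range(1, 301):
    i = ucb_pick(rewards, counts, t)
    rewards[i] += random.gauss(true_contrib[i], 0.05)  # observed improvement
    counts[i] += 1
```

Round-robin coevolution corresponds to forcing `counts` to stay uniform; the bandit replaces that with contribution-proportional allocation.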
A Bayesian approach to constrained single- and multi-objective optimization
This article addresses the problem of derivative-free (single- or
multi-objective) optimization subject to multiple inequality constraints. Both
the objective and constraint functions are assumed to be smooth, non-linear and
expensive to evaluate. As a consequence, the number of evaluations that can be
used to carry out the optimization is very limited, as in complex industrial
design optimization problems. The method we propose to overcome this difficulty
has its roots in both the Bayesian and the multi-objective optimization
literatures. More specifically, an extended domination rule is used to handle
objectives and constraints in a unified way, and a corresponding expected
hyper-volume improvement sampling criterion is proposed. This new criterion is
naturally adapted to the search of a feasible point when none is available, and
reduces to existing Bayesian sampling criteria---the classical Expected
Improvement (EI) criterion and some of its constrained/multi-objective
extensions---as soon as at least one feasible point is available. The
calculation and optimization of the criterion are performed using Sequential
Monte Carlo techniques. In particular, an algorithm similar to the subset
simulation method, which is well known in the field of structural reliability,
is used to estimate the criterion. The method, which we call BMOO (for Bayesian
Multi-Objective Optimization), is compared to state-of-the-art algorithms for
single- and multi-objective constrained optimization.
Contribution based multi-island competitive cooperative coevolution
Competition in cooperative coevolution (CC) has demonstrated success in solving global optimization problems. In a recent study, a multi-island competitive cooperative coevolution (MIC3) algorithm was introduced that featured competition and collaboration of several different problem decomposition strategies implemented as independent islands. It was shown that MIC3converges to high quality solutions without the need to find an optimal decomposition. MIC3splits the computational budget in terms of the number of function evaluations, equally amongst all the islands and evolves them in a round-robin fashion. This overlooks the difference in contributions of different islands towards improving the overall objective function value. Therefore, a considerable amount of function evaluations is wasted on the low-contributing islands as their problem decomposition strategies may not appeal to the problem at the given stage of the evolutionary process. This paper proposes contribution-based MIC3 algorithms (MIC4) that quantifies the contributions of each island and allocates the computational budget accordingly. The experimental analysis reveals that the proposed method outperforms its counterpart
Towards Better Integration of Surrogate Models and Optimizers
Surrogate-Assisted Evolutionary Algorithms (SAEAs) have proven very effective in solving (synthetic and real-world) computationally expensive optimization problems with a limited number of function evaluations. The two main components of SAEAs are the surrogate model and the evolutionary optimizer, both of which use parameters to control their respective behavior. These parameters are likely to interact closely, and hence the exploitation of any such relationships may lead to the design of an enhanced SAEA. In this chapter, as a first step, we focus on Kriging and the Efficient Global Optimization (EGO) framework. We discuss potentially profitable ways of better integrating the model and the optimizer. Furthermore, we investigate in depth how different parameters of the model and the optimizer impact optimization results. In particular, we determine whether there are any interactions between these parameters, and how the problem characteristics impact optimization results. In the experimental study, we use the popular Black-Box Optimization Benchmarking (BBOB) testbed. Interestingly, the analysis finds no evidence for significant interactions between model and optimizer parameters, but independently their performance has a significant interaction with the objective function. Based on our results, we make recommendations on how best to configure EGO.
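The EGO loop the chapter builds on can be sketched from scratch: fit a Kriging (Gaussian-process) model, maximize Expected Improvement, evaluate the true function at that point, repeat. This is a toy 1-D sketch with a zero-mean GP, a fixed RBF length-scale, and a grid-searched EI maximizer; real EGO implementations estimate the Kriging hyper-parameters by maximum likelihood, and the toy objective and all parameter settings here are illustrative assumptions.

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xq, ls=0.2, noise=1e-8):
    """Zero-mean Kriging/GP posterior mean and std at query points Xq."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xq, ls)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    sd = np.sqrt(np.clip(1.0 - np.sum(v ** 2, axis=0), 0.0, None))
    return mu, sd

def f(x):
    """Toy stand-in for the expensive black-box objective."""
    return np.sin(3 * x) + 0.5 * x

erf = np.vectorize(math.erf)
X = np.array([0.1, 0.9])          # initial design
y = f(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(5):                # five EGO iterations: fit, maximize EI, evaluate
    mu, sd = gp_posterior(X, y, grid)
    z = np.where(sd > 0, (y.min() - mu) / np.maximum(sd, 1e-12), 0.0)
    Phi = 0.5 * (1.0 + erf(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    ei = sd * (z * Phi + phi)
    x_next = grid[int(np.argmax(ei))]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))
```

The chapter's question is precisely how choices like the kernel length-scale (model side) and the EI-maximization strategy (optimizer side) interact, which this sketch hard-codes.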
An ADMM Based Framework for AutoML Pipeline Configuration
We study the AutoML problem of automatically configuring machine learning
pipelines by jointly selecting algorithms and their appropriate
hyper-parameters for all steps in supervised learning pipelines. This black-box
(gradient-free) optimization with mixed integer & continuous variables is a
challenging problem. We propose a novel AutoML scheme by leveraging the
alternating direction method of multipliers (ADMM). The proposed framework is
able to (i) decompose the optimization problem into easier sub-problems that
have a reduced number of variables and circumvent the challenge of mixed
variable categories, and (ii) incorporate black-box constraints along-side the
black-box optimization objective. We empirically evaluate the flexibility (in
utilizing existing AutoML techniques), effectiveness (against open source
AutoML toolkits),and unique capability (of executing AutoML with practically
motivated black-box constraints) of our proposed scheme on a collection of
binary classification data sets from UCI ML& OpenML repositories. We observe
that on an average our framework provides significant gains in comparison to
other AutoML frameworks (Auto-sklearn & TPOT), highlighting the practical
advantages of this framework
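ADMM's decomposition mechanism can be illustrated on a scalar toy problem, minimize (x - 3)^2 + |z| subject to x = z. This is not the paper's AutoML formulation, but it shows how the splitting turns one coupled problem into two easy sub-problem updates plus a dual update, the same mechanism the framework uses to separate variable groups.

```python
def soft_threshold(v, k):
    """Proximal operator of k*|z|: shrink v toward zero by k."""
    return max(v - k, 0.0) - max(-v - k, 0.0)

# ADMM on: minimize (x - 3)^2 + |z|  subject to  x = z.
# Each sub-problem is easy on its own; the dual variable u enforces
# consensus between the two blocks over the iterations.
rho = 1.0
x = z = u = 0.0
for _ in range(50):
    x = (2 * 3.0 + rho * (z - u)) / (2 + rho)  # smooth quadratic sub-problem
    z = soft_threshold(x + u, 1.0 / rho)       # non-smooth sub-problem
    u += x - z                                 # scaled dual (consensus) update

# The combined problem's optimum is x = z = 2.5
# (from 2*(x - 3) + 1 = 0 on the x > 0 branch).
```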