93 research outputs found
Rank-Based Learning and Local Model Based Evolutionary Algorithm for High-Dimensional Expensive Multi-Objective Problems
Surrogate-assisted evolutionary algorithms have been widely developed in recent years to solve complex and computationally expensive multi-objective optimization problems. However, when dealing with high-dimensional optimization problems, the performance of these surrogate-assisted multi-objective evolutionary algorithms deteriorates drastically. In this work, a novel Classifier-assisted rank-based learning and Local Model based multi-objective Evolutionary Algorithm (CLMEA) is proposed for high-dimensional expensive multi-objective optimization problems. The proposed algorithm consists of three parts: classifier-assisted rank-based learning, hypervolume-based non-dominated search, and local search in the relatively sparse objective space. Specifically, a probabilistic neural network is built as a classifier to divide the offspring into a number of ranks. The offspring in different ranks use a rank-based learning strategy to generate more promising and informative candidates for real function evaluations. Then, radial basis function networks are built as surrogates to approximate the objective functions. After searching for non-dominated solutions assisted by the surrogate model, the candidates with higher hypervolume improvement are selected for real evaluations. Subsequently, in order to maintain the diversity of solutions, the most uncertain sample point among the non-dominated solutions, as measured by the crowding distance, is selected as the guided parent to further infill the uncertain region of the front. Experimental results on benchmark problems and a real-world application to geothermal reservoir heat extraction optimization demonstrate that the proposed algorithm shows superior performance compared with state-of-the-art surrogate-assisted multi-objective evolutionary algorithms. The source code for this work is available at https://github.com/JellyChen7/CLMEA
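The hypervolume-improvement infill step described in the abstract can be sketched for the two-objective minimization case. This is a minimal NumPy illustration of the general criterion (sweep-based 2-D hypervolume plus greedy candidate selection), not the authors' implementation:

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D front (minimization) w.r.t. a reference point.
    Assumes every point of the front dominates ref."""
    pts = front[np.argsort(front[:, 0])]      # sweep in ascending f1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def select_by_hv_improvement(front, candidates, ref):
    """Pick the candidate whose addition most increases the hypervolume."""
    base = hypervolume_2d(front, ref)
    gains = [hypervolume_2d(np.vstack([front, c]), ref) - base
             for c in candidates]
    return int(np.argmax(gains))
```

A dominated candidate yields zero gain, so this criterion naturally favors points that push the front outward.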
Boosting data-driven evolutionary algorithm with localized data generation
By efficiently building and exploiting surrogates, data-driven evolutionary algorithms (DDEAs) can be very helpful in solving expensive and computationally intensive problems. However, they still often suffer from two difficulties. First, many existing methods for building a single ad hoc surrogate are suitable for some special problems but may not work well on others. Second, the optimization accuracy of DDEAs deteriorates if the available data are not sufficient for building accurate surrogates, which is common in expensive optimization problems. To this end, this article proposes a novel DDEA with two efficient components. First, a boosting strategy (BS) is proposed for self-aware model management, which can iteratively build and combine surrogates to obtain suitable surrogate models for different problems. Second, a localized data generation (LDG) method is proposed to generate synthetic data to alleviate data shortage and increase data quantity, which is achieved by approximating fitness through data positions. By integrating the BS and the LDG, the BDDEA-LDG algorithm is able to improve model accuracy and data quantity at the same time, automatically, according to the problem at hand. Besides, a trade-off is empirically considered to strike a better balance between the effectiveness of surrogates and the time cost of building them. The experimental results show that the proposed BDDEA-LDG algorithm can generally outperform both traditional methods without surrogates and other state-of-the-art DDEAs on widely used benchmarks and an arterial traffic signal timing real-world optimization problem. Furthermore, the proposed BDDEA-LDG algorithm uses only about 2% of the computational budget of traditional methods to produce competitive results.
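The idea of generating synthetic samples near existing data and approximating their fitness from data positions can be sketched as follows. The Gaussian perturbation and inverse-distance weighting used here are illustrative stand-ins, not the paper's exact LDG formulas:

```python
import numpy as np

def localized_data_generation(X, y, n_new, scale=0.05, rng=None):
    """Generate synthetic (x, y) pairs near existing samples.
    Each synthetic point is a small Gaussian perturbation of a real
    sample; its fitness is approximated by inverse-distance weighting
    over the two nearest real samples (a stand-in for position-based
    fitness approximation)."""
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(X), size=n_new)
    X_new = X[idx] + rng.normal(0.0, scale, size=(n_new, X.shape[1]))
    y_new = np.empty(n_new)
    for i, x in enumerate(X_new):
        d = np.linalg.norm(X - x, axis=1)
        near = np.argsort(d)[:2]              # two nearest real samples
        w = 1.0 / (d[near] + 1e-12)
        y_new[i] = np.dot(w, y[near]) / w.sum()
    return X_new, y_new
```

Because each synthetic fitness is a convex combination of real fitness values, the generated data never extrapolate outside the observed fitness range.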
Automatic surrogate model type selection during the optimization of expensive black-box problems
The use of Surrogate Based Optimization (SBO) has become commonplace for optimizing expensive black-box simulation codes. A popular SBO method is the Efficient Global Optimization (EGO) approach. However, the performance of SBO methods critically depends on the quality of the guiding surrogate. In EGO the surrogate type is usually fixed to Kriging, even though this may not be optimal for all problems. In this paper the authors propose to extend the well-known EGO method with an automatic surrogate model type selection framework that is able to dynamically select the best model type (including hybrid ensembles) depending on the data available so far. Hence, the expected improvement criterion will always be based on the best approximation available at each step of the optimization process. The approach is demonstrated on a structural optimization problem, i.e., reducing the stress on a truss-like structure. Results show that the proposed algorithm consistently finds better optima than traditional Kriging-based infill optimization.
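Dynamic surrogate type selection of this kind can be illustrated with a leave-one-out comparison between two toy surrogates, a linear model and a Gaussian RBF interpolant. The two-model pool and the LOO criterion are assumptions for this sketch, not the paper's framework:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares linear surrogate with intercept."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ coef

def fit_rbf(X, y, eps=1.0):
    """Gaussian RBF interpolant with a small jitter for stability."""
    K = np.exp(-eps * ((X[:, None] - X[None]) ** 2).sum(-1))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
    return lambda Z: np.exp(-eps * ((Z[:, None] - X[None]) ** 2).sum(-1)) @ w

def select_model(X, y, fitters):
    """Pick the fitter with the lowest leave-one-out squared error,
    then refit it on all data."""
    errs = []
    for fit in fitters:
        sq = []
        for i in range(len(X)):
            m = np.arange(len(X)) != i
            model = fit(X[m], y[m])
            sq.append((model(X[i:i + 1])[0] - y[i]) ** 2)
        errs.append(np.mean(sq))
    best = int(np.argmin(errs))
    return best, fitters[best](X, y)
```

Re-running this selection after each new evaluation mirrors the idea of always basing the infill criterion on the currently best-performing model type.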
A Random Forest Assisted Evolutionary Algorithm for Data-Driven Constrained Multi-Objective Combinatorial Optimization of Trauma Systems
Many real-world optimization problems can be solved only by using the data-driven approach, simply because no analytic objective functions are available for evaluating candidate solutions. In this work, we address a class of expensive data-driven constrained multi-objective combinatorial optimization problems, where the objectives and constraints can be calculated only on the basis of a large amount of data. To solve this class of problems, we propose to use random forests and radial basis function networks as surrogates to approximate both objective and constraint functions. In addition, logistic regression models are introduced to rectify the surrogate-assisted fitness evaluations, and a stochastic ranking selection is adopted to further reduce the influence of the approximated constraint functions. Three variants of the proposed algorithm are empirically evaluated on multi-objective knapsack benchmark problems and two real-world trauma system design problems. Experimental results demonstrate that the variant using random forest models as the surrogates is effective and efficient in solving data-driven constrained multi-objective combinatorial optimization problems.
Multi-surrogate Assisted Efficient Global Optimization for Discrete Problems
Decades of progress in simulation-based surrogate-assisted optimization and unprecedented growth in computational power have enabled researchers and practitioners to optimize previously intractable complex engineering problems. This paper investigates the possible benefit of concurrently utilizing multiple simulation-based surrogate models to solve complex discrete optimization problems. To this end, the Self-Adaptive Multi-surrogate Assisted Efficient Global Optimization algorithm (SAMA-DiEGO), which features a two-stage online model management strategy, is proposed and benchmarked on fifteen binary-encoded combinatorial and fifteen ordinal problems against several state-of-the-art non-surrogate or single-surrogate-assisted optimization algorithms. Our findings indicate that SAMA-DiEGO can rapidly converge to better solutions on a majority of the test problems, which shows the feasibility and advantage of using multiple surrogate models in optimizing discrete problems.
Machine learning into metaheuristics: A survey and taxonomy of data-driven metaheuristics
In recent years, research on applying machine learning (ML) to design efficient, effective, and robust metaheuristics has become increasingly popular. Many of these data-driven metaheuristics have generated high-quality results and represent state-of-the-art optimization algorithms. Although various approaches have been proposed, there is a lack of a comprehensive survey and taxonomy on this research topic. In this paper, we investigate different opportunities for using ML in metaheuristics. We uniformly define the various ways in which such synergies might be achieved. A detailed taxonomy is proposed according to the concerned search component: the target optimization problem, and the low-level and high-level components of metaheuristics. Our goal is also to motivate researchers in optimization to incorporate ideas from ML into metaheuristics. We identify some open research issues in this topic which need further in-depth investigation.
A portfolio approach to massively parallel Bayesian optimization
One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than one at a time. For expensive-to-evaluate black boxes, batch versions of Bayesian optimization have been proposed. They work by building a surrogate model of the black box that can be used to efficiently select the designs to evaluate via an infill criterion. Still, with higher levels of parallelization becoming available, the strategies that work for a few tens of parallel evaluations become limiting, in particular due to the complexity of selecting more evaluations. This is even more crucial when the black box is noisy, necessitating more evaluations as well as repeated experiments. Here we propose a scalable strategy that can natively keep up with massive batching, focused on the exploration/exploitation trade-off and a portfolio allocation. We compare the approach with related methods on deterministic and noisy functions, for mono- and multi-objective optimization tasks. These experiments show similar or better performance than existing methods, while being orders of magnitude faster.
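A portfolio-style batch selection can be sketched by spreading the batch across acquisition functions with different exploration weights, so the batch mixes exploitative and exploratory picks. The UCB-style scores below are an illustrative stand-in for the paper's portfolio allocation:

```python
import numpy as np

def portfolio_batch(mu, sd, batch_size, betas=None):
    """Select a batch of candidate indices by spreading picks across a
    portfolio of UCB-style acquisitions (minimization: lower mu - beta*sd
    is better). Each batch slot uses a different exploration weight beta;
    already-chosen candidates are skipped."""
    betas = betas if betas is not None else np.linspace(0.0, 3.0, batch_size)
    chosen = []
    for beta in betas:
        score = mu - beta * sd
        for i in np.argsort(score):           # best score first
            if i not in chosen:
                chosen.append(int(i))
                break
    return chosen
```

Because each slot only needs one argsort over the candidates, the cost grows linearly with the batch size, which is what makes portfolio-style allocation attractive at massive batch scales.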
A Data-Driven Evolutionary Transfer Optimization for Expensive Problems in Dynamic Environments
Many real-world problems are computationally costly and their objective functions evolve over time. Data-driven, a.k.a. surrogate-assisted, evolutionary optimization has been recognized as an effective approach for tackling expensive black-box optimization problems in a static environment, whereas it has rarely been studied under dynamic environments. This paper proposes a simple but effective transfer learning framework to empower data-driven evolutionary optimization to solve dynamic optimization problems. Specifically, it applies a hierarchical multi-output Gaussian process to capture the correlation between data collected from different time steps with a linearly increasing number of hyperparameters. Furthermore, an adaptive source task selection along with a bespoke warm-start initialization mechanism is proposed to better leverage the knowledge extracted from previous optimization exercises. By doing so, data-driven evolutionary optimization can jump-start the optimization in the new environment with a strictly limited computational budget. Experiments on synthetic benchmark test problems and a real-world case study demonstrate the effectiveness of our proposed algorithm against nine state-of-the-art peer algorithms.
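Warm-start initialization of this flavour can be sketched as seeding the new environment's population with the best solutions found in the previous one, topped up with fresh random samples. The simple top-k transfer below is an assumption standing in for the paper's bespoke mechanism:

```python
import random

def warm_start_population(prev_solutions, prev_fitness, pop_size,
                          n_transfer, sampler, rng=None):
    """Seed the initial population for a new environment (minimization)
    with the n_transfer best solutions from the previous environment,
    filling the rest with random samples drawn via `sampler`."""
    rng = rng or random.Random()
    ranked = sorted(zip(prev_fitness, prev_solutions))   # best first
    seeds = [s for _, s in ranked[:n_transfer]]
    return seeds + [sampler(rng) for _ in range(pop_size - n_transfer)]
```

Keeping some random individuals alongside the transferred ones guards against the previous optimum being misleading when the landscape has shifted substantially.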