Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control
Constrained optimization of high-dimensional numerical problems plays an
important role in many scientific and industrial applications. Function
evaluations in many industrial applications are severely limited and no
analytical information about objective function and constraint functions is
available. For such expensive black-box optimization tasks, the constrained
optimization algorithm COBRA was proposed, making use of RBF surrogate modeling
for both the objective and the constraint functions. COBRA has shown remarkable
success in reliably solving complex benchmark problems in less than 500
function evaluations. Unfortunately, COBRA requires careful adjustment of
parameters in order to do so.
In this work we present a new self-adjusting algorithm SACOBRA, which is
based on COBRA and capable of achieving high-quality results with very few
function evaluations and no parameter tuning. It is shown with the help of
performance profiles on a set of benchmark problems (G-problems, MOPTA08) that
SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter
setting. We analyze the importance of the several new elements in SACOBRA and
find that each of them contributes to boosting the overall optimization
performance. We discuss the reasons behind these improvements and in this way
gain a better understanding of high-quality RBF surrogate modeling.
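The RBF surrogate modeling at the core of COBRA/SACOBRA can be illustrated with a minimal sketch. The snippet below fits a cubic RBF interpolant to samples of a stand-in black-box objective; it is a deliberate simplification (the actual algorithms add polynomial tails, adaptive scaling, and constraint surrogates), and the objective `f` is a hypothetical example, not one of the G-problems.

```python
import numpy as np

def fit_rbf(X, y):
    # Cubic RBF interpolant s(x) = sum_i w_i * ||x - x_i||^3,
    # a simplified stand-in for the surrogates used in COBRA/SACOBRA.
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) ** 3
    w = np.linalg.solve(Phi, y)
    return lambda x: (np.linalg.norm(x - X, axis=-1) ** 3) @ w

# Stand-in for an expensive black-box objective (illustration only)
f = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))
y = np.array([f(x) for x in X])
surrogate = fit_rbf(X, y)
print(abs(surrogate(X[0]) - y[0]) < 1e-6)  # the surrogate interpolates the samples
```

Once fitted, such a cheap surrogate can be optimized in place of the expensive function, with only the proposed optimum evaluated on the true objective.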
Local Subspace-Based Outlier Detection using Global Neighbourhoods
Outlier detection in high-dimensional data is a challenging yet important
task, as it has applications in, e.g., fraud detection and quality control.
State-of-the-art density-based algorithms perform well because they 1) take the
local neighbourhoods of data points into account and 2) consider feature
subspaces. In highly complex and high-dimensional data, however, existing
methods are likely to overlook important outliers because they do not
explicitly take into account that the data is often a mixture distribution of
multiple components.
We therefore introduce GLOSS, an algorithm that performs local subspace
outlier detection using global neighbourhoods. Experiments on synthetic data
demonstrate that GLOSS more accurately detects local outliers in mixed data
than its competitors. Moreover, experiments on real-world data show that our
approach identifies relevant outliers overlooked by existing methods,
confirming that one should keep an eye on the global perspective even when
doing local outlier detection.
Comment: Short version accepted at IEEE BigData 201
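The "local neighbourhood" intuition behind density-based detectors can be sketched with a simple LOF-style score: a point's mean k-NN distance divided by that of its neighbours. This is purely illustrative of the local-density idea; GLOSS itself additionally searches feature subspaces and uses global neighbourhoods to handle mixture data.

```python
import numpy as np

def local_outlier_scores(X, k=3):
    # LOF-style density ratio: a point's mean k-NN distance divided by
    # the mean k-NN distance of its k neighbours (illustration only).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    idx = np.argsort(D, axis=1)[:, :k]
    kdist = np.take_along_axis(D, idx, axis=1).mean(axis=1)
    return kdist / kdist[idx].mean(axis=1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, size=(30, 2)), [[3.0, 3.0]]])
print(int(np.argmax(local_outlier_scores(X))))  # index 30: the injected outlier
```

Points inside the cluster score near 1, while the injected point scores much higher because its neighbourhood is far denser than its own surroundings.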
Efficient Computation of Expected Hypervolume Improvement Using Box Decomposition Algorithms
In the field of multi-objective optimization algorithms, multi-objective
Bayesian Global Optimization (MOBGO) is an important branch, in addition to
evolutionary multi-objective optimization algorithms (EMOAs). MOBGO utilizes
Gaussian Process models learned from previous objective function evaluations to
decide the next evaluation site by maximizing or minimizing an infill
criterion. A common criterion in MOBGO is the Expected Hypervolume Improvement
(EHVI), which shows a good performance on a wide range of problems, with
respect to exploration and exploitation. However, so far it has been a
challenge to calculate exact EHVI values efficiently. In this paper, an
efficient algorithm for the computation of the exact EHVI for a generic case is
proposed. This efficient algorithm is based on partitioning the integration
volume into a set of axis-parallel slices. Theoretically, the upper bound time
complexities are improved from previously O(n^2) and O(n^3), for two- and
three-objective problems respectively, to Θ(n log n), which is
asymptotically optimal. This article generalizes the scheme to the higher
dimensional case by utilizing a new hyperbox decomposition technique, which was
proposed by Dächert et al., EJOR, 2017. It also utilizes a generalization of
the multilayered integration scheme that scales linearly in the number of
hyperboxes of the decomposition. The speed comparison shows that the proposed
algorithm in this paper significantly reduces computation time. Finally, this
decomposition technique is applied in the calculation of the Probability of
Improvement (PoI).
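The quantity being computed can be made concrete with a small sketch: the 2-D hypervolume indicator (minimization) plus a Monte-Carlo estimate of the EHVI for a candidate with independent Gaussian predictive marginals. This sampling estimator is exactly what the paper's box-decomposition algorithms replace with an efficient exact computation; the Pareto front and predictive distribution below are made-up examples.

```python
import numpy as np

def hv_2d(points, ref):
    # Hypervolume (minimization) dominated by `points` w.r.t. reference `ref`,
    # via a left-to-right sweep over the non-dominated staircase.
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted({(float(p[0]), float(p[1])) for p in points}):
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def mc_ehvi(pareto, ref, mean, std, n=20_000, seed=0):
    # Monte-Carlo EHVI estimate for a candidate whose two objectives have
    # independent Gaussian marginals; exact box-decomposition algorithms
    # compute this expectation analytically instead.
    rng = np.random.default_rng(seed)
    base = hv_2d(pareto, ref)
    samples = rng.normal(mean, std, size=(n, 2))
    return float(np.mean([hv_2d(np.vstack([pareto, [s]]), ref) - base
                          for s in samples]))

pareto = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
ehvi = mc_ehvi(pareto, ref=(4.0, 4.0), mean=(1.5, 1.5), std=0.1)
print(ehvi)  # roughly 1.25 for this made-up configuration
```

In MOBGO, the next evaluation site is chosen by maximizing such an EHVI value over the search space, which is why fast exact computation matters.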
Online Selection of CMA-ES Variants
In the field of evolutionary computation, one of the most challenging topics
is algorithm selection. Knowing which heuristics to use for which optimization
problem is key to obtaining high-quality solutions. We aim to extend this
research topic by taking a first step towards a selection method for adaptive
CMA-ES algorithms. We build upon the theoretical work done by van Rijn et
al. [PPSN'18], in which the potential of switching between
different CMA-ES variants was quantified in the context of a modular CMA-ES
framework.
We demonstrate in this work that their proposed approach is not very
reliable, in that implementing the suggested adaptive configurations does not
yield the predicted performance gains. We propose a revised approach, which
results in a more robust fit between predicted and actual performance. The
adaptive CMA-ES approach obtains performance gains on 18 out of 24 tested
functions of the BBOB benchmark, with stable advantages of up to 23\%. An
analysis of module activation indicates which modules are most crucial for the
different phases of optimizing each of the 24 benchmark problems. The module
activation also suggests that additional gains are possible when including the
(B)IPOP modules, which we have excluded from the present work.
Comment: To appear at the Genetic and Evolutionary Computation Conference
(GECCO'19). Appendix will be added in due time.
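The idea of an adaptive configuration that switches variants mid-run can be sketched with a toy example. Here each "variant" is a (1+1)-style random search with a fixed step size, standing in for a modular CMA-ES configuration; the step sizes, budgets, and objective are all made up for illustration.

```python
import random

def run_variant(step_size, f, x, evals, rng):
    # One toy "variant": (1+1)-style Gaussian mutation with a fixed step
    # size, a stand-in for a single modular CMA-ES configuration.
    fx = f(x)
    for _ in range(evals):
        cand = [xi + rng.gauss(0.0, step_size) for xi in x]
        if f(cand) < fx:
            x, fx = cand, f(cand)
    return x, fx

def switching_run(f, x0, schedule, seed=0):
    # Adaptive configuration: switch variants at fixed budgets, mimicking
    # online selection between algorithm variants within one run.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for step_size, evals in schedule:
        x, fx = run_variant(step_size, f, x, evals, rng)
    return fx

sphere = lambda x: sum(xi * xi for xi in x)
x0 = [2.0, 2.0]
# explore with a large step first, then exploit with a small one
final = switching_run(sphere, x0, schedule=[(0.5, 200), (0.05, 200)])
print(final < sphere(x0))  # True: the switched run improves on the start
```

The hard part, which the paper addresses, is deciding when and to which variant to switch so that the predicted gains of a configuration actually materialize.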
Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection
A key challenge in the application of evolutionary algorithms in practice is
the selection of an algorithm instance that best suits the problem at hand.
What complicates this decision further is that different algorithms may be best
suited for different stages of the optimization process. Dynamic algorithm
selection and configuration are therefore well-researched topics in
evolutionary computation. However, while hyper-heuristics and parameter control
studies typically assume a setting in which the algorithm needs to be chosen
while running the algorithms, without prior information, AutoML approaches such
as hyper-parameter tuning and automated algorithm configuration assume the
possibility of evaluating different configurations before making a final
recommendation. In practice, however, we are often in a middle-ground between
these two settings, where we need to decide on the algorithm instance before
the run ("one-shot" setting), but where we have (possibly lots of) data
available on which we can base an informed decision.
We analyze in this work how such prior performance data can be used to infer
informed dynamic algorithm selection schemes for the solution of pseudo-Boolean
optimization problems. Our specific use-case considers a family of genetic
algorithms.
Comment: Submitted for review to GECCO'2
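A naive version of this data-driven selection can be sketched as follows: given prior benchmarking data (expected best-so-far value per algorithm at several budgets), pick for each phase the algorithm that performed best at that budget. The algorithm names and numbers are hypothetical, and real schemes must also account for warm-start and switching effects that this lookup ignores.

```python
# Hypothetical prior data: expected best-so-far f-value (lower is better)
# for each algorithm instance at three budget checkpoints.
perf = {
    "GA(1+1)":   {100: 5.0, 1000: 1.0, 10000: 0.5},
    "GA(10+10)": {100: 8.0, 1000: 0.8, 10000: 0.1},
}

def oneshot_schedule(perf, budgets):
    # For each phase, select the algorithm with the best prior performance
    # at that budget (a naive sketch of informed dynamic selection).
    return [min(perf, key=lambda a: perf[a][b]) for b in budgets]

print(oneshot_schedule(perf, [100, 1000, 10000]))
# ['GA(1+1)', 'GA(10+10)', 'GA(10+10)']
```

The schedule is fixed before the run starts, matching the one-shot setting: the data informs the decision, but no configurations are evaluated online.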
A Decision Diagram Operation for Reachability
Saturation is considered the state-of-the-art method for computing fixpoints
with decision diagrams. We present a relatively simple decision diagram
operation called REACH that also computes fixpoints. In contrast to saturation,
it does not require a partitioning of the transition relation. We give
sequential algorithms implementing the new operation for both binary and
multi-valued decision diagrams, and moreover provide parallel counterparts. We
implement these algorithms and experimentally compare their performance against
saturation on 692 model checking benchmarks in different languages. The results
show that the REACH operation often outperforms saturation, especially on
transition relations with low locality. In a comparison between parallelized
versions of REACH and saturation we find that REACH obtains comparable speedups
up to 16 cores, although it falls behind saturation at 64 cores. Finally, in a
comparison with the state-of-the-art model checking tool ITS-tools we find that
REACH outperforms ITS-tools on 29% of models, suggesting that REACH can be
useful as a complementary method in an ensemble tool.
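The fixpoint such an operation computes can be shown with explicit state sets: the least fixpoint of repeatedly adding successor states. REACH performs this symbolically on decision diagrams; the explicit breadth-first version below, with a made-up transition relation, only illustrates the semantics.

```python
def reach(initial, relation):
    # Least fixpoint of S = S ∪ post(S): all states reachable from
    # `initial` under `relation` (explicit-set illustration of what
    # REACH computes symbolically on decision diagrams).
    states, frontier = set(initial), set(initial)
    while frontier:
        successors = {t for s in frontier for t in relation.get(s, ())}
        frontier = successors - states
        states |= frontier
    return states

rel = {0: [1], 1: [2], 2: [0], 3: [4]}  # hypothetical transition relation
print(sorted(reach({0}, rel)))  # [0, 1, 2]; states 3 and 4 are unreachable
```

Unlike saturation, which applies partitioned transition relations at matching diagram levels, this fixpoint iteration works with a single monolithic relation, which is the contrast the paper exploits.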