Online Selection of CMA-ES Variants
In the field of evolutionary computation, one of the most challenging topics
is algorithm selection. Knowing which heuristics to use for which optimization
problem is key to obtaining high-quality solutions. We aim to extend this
research topic by taking a first step towards a selection method for adaptive
CMA-ES algorithms. We build upon the theoretical work done by van Rijn
\textit{et al.} [PPSN'18], in which the potential of switching between
different CMA-ES variants was quantified in the context of a modular CMA-ES
framework.
We demonstrate in this work that their proposed approach is not very
reliable, in that implementing the suggested adaptive configurations does not
yield the predicted performance gains. We propose a revised approach, which
results in a more robust fit between predicted and actual performance. The
adaptive CMA-ES approach obtains performance gains on 18 out of 24 tested
functions of the BBOB benchmark, with stable advantages of up to 23\%. An
analysis of module activation indicates which modules are most crucial for the
different phases of optimizing each of the 24 benchmark problems. The module
activation also suggests that additional gains are possible when including the
(B)IPOP modules, which we have excluded from the present work.

Comment: To appear at the Genetic and Evolutionary Computation Conference (GECCO'19); an appendix will be added in due time.
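The idea of switching between configurations mid-run can be illustrated with a deliberately simplified sketch. The optimizers below are toy Gaussian hill-climbers standing in for CMA-ES variants, and the function names, step sizes, and budget split are our own illustrative choices, not the method proposed in the paper.

```python
import random

def optimize_with_switch(f, x0, variant_a, variant_b, switch_at, budget):
    """Run variant_a for the first `switch_at` evaluations, then hand the
    incumbent solution over to variant_b for the remaining budget.  A
    minimal stand-in for switching between algorithm configurations."""
    x = variant_a(f, x0, switch_at)
    return variant_b(f, x, budget - switch_at)

def make_hill_climber(step):
    """Toy 'variant': accept a Gaussian perturbation if it improves f.
    (For brevity, f(best) is recomputed inside the loop.)"""
    def run(f, x, evals):
        best = list(x)
        for _ in range(evals):
            cand = [xi + random.gauss(0.0, step) for xi in best]
            if f(cand) < f(best):
                best = cand
        return best
    return run

sphere = lambda x: sum(xi * xi for xi in x)
random.seed(0)
result = optimize_with_switch(sphere, [5.0, 5.0],
                              make_hill_climber(1.0),   # exploratory phase
                              make_hill_climber(0.05),  # fine-tuning phase
                              switch_at=200, budget=400)
```

The switch point is fixed here; the selection problem studied in the paper is precisely to choose the variants and the switch point per problem.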
Sequential vs. Integrated Algorithm Selection and Configuration: A Case Study for the Modular CMA-ES
When faced with a specific optimization problem, choosing which algorithm to
use is always a tough task. Not only is there a vast variety of algorithms to
select from, but these algorithms often are controlled by many hyperparameters,
which need to be tuned in order to achieve the best performance possible.
Usually, this problem is separated into two parts: algorithm selection and
algorithm configuration. With the significant advances made in Machine
Learning, however, these problems can be integrated into a combined algorithm
selection and hyperparameter optimization task, commonly known as the CASH
problem. In this work we compare sequential and integrated algorithm selection
and configuration approaches for the case of selecting and tuning the best out
of 4608 variants of the Covariance Matrix Adaptation Evolution Strategy
(CMA-ES) tested on the Black Box Optimization Benchmark (BBOB) suite. We first
show that the ranking of the modular CMA-ES variants depends to a large extent
on the quality of the hyperparameters. This implies that even a sequential
approach based on complete enumeration of the algorithm space will likely
result in sub-optimal solutions. In fact, we show that the integrated approach
manages to provide competitive results at a much smaller computational cost. We
also compare two different mixed-integer algorithm configuration techniques,
called irace and Mixed-Integer Parallel Efficient Global Optimization
(MIP-EGO). While we show that the two methods differ significantly in their
treatment of the exploration-exploitation balance, their overall performances
are very similar.
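As a minimal illustration of the CASH formulation itself (not of irace or MIP-EGO), one can jointly sample an algorithm and its hyperparameter configuration and keep the best pair. Everything below, including the toy runner functions and their optima, is our own hypothetical example.

```python
import random

def cash_random_search(candidates, f, budget, seed=0):
    """Combined algorithm selection and configuration by pure random search:
    jointly sample an algorithm name and a hyperparameter setting, keep the
    best.  `candidates` maps a name to (sampler, runner), where sampler(rng)
    draws a configuration and runner(f, config) returns a score (lower is
    better)."""
    rng = random.Random(seed)
    best = (float("inf"), None, None)
    for _ in range(budget):
        name = rng.choice(list(candidates))
        sampler, runner = candidates[name]
        config = sampler(rng)
        score = runner(f, config)
        if score < best[0]:
            best = (score, name, config)
    return best

# Toy example: each "algorithm" is sensitive to its single hyperparameter,
# so the CASH search has to get both the choice and the tuning right.
def runner_a(f, c):  # best near c = 0.25
    return f(c) + abs(c - 0.25)

def runner_b(f, c):  # best near c = 0.75, with a constant penalty
    return f(c) + abs(c - 0.75) + 0.1

uniform = lambda rng: rng.random()
candidates = {"A": (uniform, runner_a), "B": (uniform, runner_b)}
score, name, config = cash_random_search(candidates, lambda c: 0.0, budget=500)
```

A sequential approach would instead tune each algorithm separately and compare afterwards, which is exactly the enumeration cost the abstract argues against.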
The Hessian Estimation Evolution Strategy
We present a novel black box optimization algorithm called Hessian Estimation
Evolution Strategy. The algorithm updates the covariance matrix of its sampling
distribution by directly estimating the curvature of the objective function.
This algorithm design is targeted at twice continuously differentiable
problems. For this, we extend the cumulative step-size adaptation algorithm of
the CMA-ES to mirrored sampling. We demonstrate that our approach to covariance
matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed. We
also show that the algorithm is surprisingly robust when its core assumption of
a twice continuously differentiable objective function is violated. The
approach yields a new evolution strategy with competitive performance, and at
the same time it also offers an interesting alternative to the usual covariance
matrix update mechanism.
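Mirrored sampling itself is easy to sketch. The toy strategy below evaluates every sampled direction in both orientations, so each "bad" step yields a second candidate for free. It is our own simplified illustration with a fixed step size; the paper pairs mirrored sampling with cumulative step-size adaptation and a full covariance model.

```python
import random

def mirrored_es(f, x0, sigma=0.3, n_pairs=5, generations=150, seed=0):
    """Elitist evolution strategy with mirrored sampling: every direction d
    is tried as mean + sigma*d AND mean - sigma*d; the mean moves to the
    best point seen in the generation (including the current mean)."""
    rng = random.Random(seed)
    mean = list(x0)
    for _ in range(generations):
        best_x, best_fx = mean, f(mean)
        for _ in range(n_pairs):
            d = [rng.gauss(0.0, 1.0) for _ in range(len(mean))]
            for sign in (1.0, -1.0):  # the mirrored pair
                x = [m + sign * sigma * di for m, di in zip(mean, d)]
                fx = f(x)
                if fx < best_fx:
                    best_x, best_fx = x, fx
        mean = best_x
    return mean

sphere = lambda x: sum(v * v for v in x)
result = mirrored_es(sphere, [5.0, -3.0])
```

Because the step size is fixed here, final precision is limited by sigma; adapting sigma is what the cumulative step-size mechanism mentioned in the abstract provides.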
Analysis of Different Types of Regret in Continuous Noisy Optimization
The performance measure of an algorithm is a crucial part of its analysis.
The performance can be determined by studying the convergence rate of the
algorithm in question. It is necessary to study some (hopefully convergent)
sequence that measures how "good" the approximated optimum is compared to
the real optimum. The concept of Regret is widely used in the bandit literature
for assessing the performance of an algorithm. The same concept is also used in
the framework of optimization algorithms, sometimes under other names or
without a specific name. The numerical evaluation of the convergence rate of
noisy algorithms often involves approximations of regrets. We discuss here two
types of approximations of Simple Regret used in practice for the evaluation of
algorithms for noisy optimization. We use specific algorithms of different
nature and the noisy sphere function to show the following results. The
approximation of Simple Regret, termed here Approximate Simple Regret, used in
some optimization testbeds, fails to estimate the Simple Regret convergence
rate. We also discuss a recent new approximation of Simple Regret, that we term
Robust Simple Regret, and show its advantages and disadvantages.

Comment: Genetic and Evolutionary Computation Conference 2016, Jul 2016, Denver, United States.
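In common notation (symbols are ours, not necessarily the paper's): for a noisy objective with noise-free mean f, optimum x*, and recommended point after n evaluations denoted by the tilde, the Simple Regret is

```latex
\mathrm{SR}_n \;=\; \mathbb{E}\big[f(\tilde{x}_n)\big] \;-\; f(x^*).
```

A testbed that cannot access this expectation may substitute a single noisy evaluation of the recommended point, or the best noisy value observed so far; such substitutes need not share the convergence rate of SR_n, which is the failure mode discussed above.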
Comparing Mirrored Mutations and Active Covariance Matrix Adaptation in the IPOP-CMA-ES on the Noiseless BBOB Testbed
This paper investigates two variants of the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Active covariance matrix adaptation allows for negative weights in the covariance matrix update rule, such that "bad" steps are (actively) taken into account when updating the covariance matrix of the sample distribution. Mirrored mutations via selective mirroring also take the "bad" steps into account: they are first evaluated in the opposite (mirrored) direction and then considered for regular selection. In this study, we compare the performance of the two variants empirically on the noiseless BBOB testbed. The CMA-ES with selectively mirrored mutations outperforms the active CMA-ES only on the sphere function, while the active variant statistically significantly outperforms mirrored mutations on 10 of 24 functions in several dimensions.
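For reference, active covariance matrix adaptation is usually described against the standard rank-one plus rank-mu covariance update (textbook CMA-ES notation, not taken from this paper):

```latex
C \;\leftarrow\; (1 - c_1 - c_\mu)\,C \;+\; c_1\, p_c p_c^{\top}
  \;+\; c_\mu \sum_{i=1}^{\lambda} w_i\, y_{i:\lambda}\, y_{i:\lambda}^{\top},
\qquad y_{i:\lambda} \;=\; \frac{x_{i:\lambda} - m}{\sigma}.
```

In the default strategy, only the best-ranked offspring receive positive weights w_i and the rest are ignored; the active variant assigns negative weights to the worst-ranked offspring, so the covariance is actively shrunk along unsuccessful directions.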
Variable Metric Random Pursuit
We consider unconstrained randomized optimization of smooth convex objective
functions in the gradient-free setting. We analyze Random Pursuit (RP)
algorithms with fixed (F-RP) and variable metric (V-RP). The algorithms only
use zeroth-order information about the objective function and compute an
approximate solution by repeated optimization over randomly chosen
one-dimensional subspaces. The distribution of search directions is dictated by
the chosen metric.
Variable Metric RP uses novel variants of a randomized zeroth-order Hessian
approximation scheme recently introduced by Leventhal and Lewis (D. Leventhal
and A. S. Lewis, Optimization 60(3), 329--345, 2011). We here present (i) a
refined analysis of the expected single step progress of RP algorithms and
their global convergence on (strictly) convex functions and (ii) novel
convergence bounds for V-RP on strongly convex functions. We also quantify how
well the employed metric needs to match the local geometry of the function in
order for the RP algorithms to converge with the best possible rate.
Our theoretical results are accompanied by numerical experiments, comparing
V-RP with the derivative-free schemes CMA-ES, Implicit Filtering, Nelder-Mead,
NEWUOA, Pattern-Search and Nesterov's gradient-free algorithms.

Comment: 42 pages, 6 figures, 15 tables, submitted to journal. Version 3: majorly revised second part, i.e. Section 5 and Appendix.
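The core Random Pursuit loop, minimize along a random line and repeat, can be sketched as follows. The fixed (identity) metric, the bracket [-2, 2], and the crude ternary line search are our own simplifications of the fixed-metric F-RP scheme, not the paper's exact algorithm.

```python
import random

def random_pursuit(f, x0, iterations=300, seed=0):
    """Fixed-metric Random Pursuit sketch: repeatedly draw a random
    direction u and approximately minimize f along the line x + t*u,
    then move to the line minimizer."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iterations):
        u = [rng.gauss(0.0, 1.0) for _ in x]
        # Ternary search for argmin_t f(x + t*u) on a fixed bracket;
        # valid because f restricted to a line stays convex.
        lo, hi = -2.0, 2.0
        for _ in range(60):
            m1 = lo + (hi - lo) / 3.0
            m2 = hi - (hi - lo) / 3.0
            p1 = [xi + m1 * ui for xi, ui in zip(x, u)]
            p2 = [xi + m2 * ui for xi, ui in zip(x, u)]
            if f(p1) < f(p2):
                hi = m2
            else:
                lo = m1
        t = (lo + hi) / 2.0
        x = [xi + t * ui for xi, ui in zip(x, u)]
    return x

sphere = lambda x: sum(v * v for v in x)
result = random_pursuit(sphere, [3.0, -2.0])
```

The variable-metric version would instead sample u from a distribution shaped by an estimated Hessian, which is what the Leventhal-Lewis scheme cited above supplies.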
Cardiac biomarkers by point-of-care testing - back to the future?
The measurement of the cardiac troponins (cTn), cardiac troponin T (cTnT) and cardiac troponin I (cTnI), is integral to the management of patients with suspected acute coronary syndromes (ACS). Patients without clear electrocardiographic evidence of myocardial infarction require measurement of cTnT or cTnI. It therefore follows that a rapid turnaround time (TAT), combined with the immediacy of results return achieved by point-of-care testing (POCT), offers a substantial clinical benefit. Rapid results return plus immediate decision-making should translate into improved patient flow and improved therapeutic decision-making. The development of high-sensitivity troponin assays offers significant clinical advantages. Diagnostic algorithms have been devised utilising very low cut-offs at first presentation and rapid sequential measurements based on admission and 3 h sampling, most recently on admission and 1 h sampling. Such troponin algorithms would be even better suited to point-of-care testing, as the TAT of typically 60 min achieved by the diagnostic laboratory corresponds to the sampling interval required by the clinician using the algorithm. However, the limits of detection and analytical imprecision required to utilise these algorithms are not yet met by any easy-to-use POCT systems.