Evolution strategies for robust optimization
Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but fail in practice. Robust optimization is the practice of optimization that actively accounts for uncertainties and/or noise. Evolutionary Algorithms form a class of optimization algorithms that use the principle of evolution to find good solutions to optimization problems. Because uncertainty and noise are indispensable parts of nature, this class of optimization algorithms seems to be a logical choice for robust optimization scenarios. This thesis provides a clear definition of the term robust optimization and a comparison and practical guidelines on how Evolution Strategies, a subclass of Evolutionary Algorithms for real-parameter optimization problems, should be adapted for such scenarios.
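The distinction drawn here, between solutions that are optimal in the strict classical sense and solutions that remain good under perturbation, can be illustrated with a small sketch (the function names, shapes, and values below are hypothetical, not taken from the thesis): a robust or "effective" fitness is the expected objective value under random perturbation of the input.

```python
import random

def sharp_peak(x):
    # Narrow optimum at x = 2: high value, but very sensitive to perturbation.
    return 1.0 if abs(x - 2.0) < 0.05 else 0.0

def broad_peak(x):
    # Wider, slightly lower plateau around x = 8.
    return 0.9 if abs(x - 8.0) < 1.0 else 0.0

def f(x):
    return max(sharp_peak(x), broad_peak(x))

def robust_fitness(f, x, sigma=0.5, samples=200, rng=None):
    """Effective (robust) fitness: expected f under Gaussian input noise."""
    rng = rng or random.Random(0)
    return sum(f(x + rng.gauss(0.0, sigma)) for _ in range(samples)) / samples

# The sharp peak wins under the classical strict notion of optimality...
assert f(2.0) > f(8.0)
# ...but the broad peak wins once input uncertainty is accounted for.
assert robust_fitness(f, 8.0) > robust_fitness(f, 2.0)
```

An optimizer driven by `robust_fitness` rather than `f` would be steered toward the broad plateau, which is exactly the kind of solution that "fails less in practice".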
Robust feedback switching control: dynamic programming and viscosity solutions
We consider a robust switching control problem. The controller only observes
the evolution of the state process, and thus uses feedback (closed-loop)
switching strategies, a non-standard class of switching controls introduced in
this paper. The adverse player (nature) chooses open-loop controls that
represent the so-called Knightian uncertainty, i.e., misspecifications of the
model. The (half) game switcher versus nature is then formulated as a two-step
(robust) optimization problem. We develop the stochastic Perron method in this
framework, and prove that it produces a viscosity sub- and supersolution to a
system of Hamilton-Jacobi-Bellman (HJB) variational inequalities that
envelop the value function. Together with a comparison principle, this
characterizes the value function of the game as the unique viscosity solution
to the HJB equation, and shows as a byproduct the dynamic programming principle
for robust feedback switching control problems.
Comment: to appear in SIAM Journal on Control and Optimization
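For orientation, one common shape of such a system of HJB variational inequalities for switching problems (written here in generic notation, as a sketch rather than the exact system of the paper) is

```latex
\min\Big\{ -\partial_t v_i - \inf_{b \in B} \mathcal{L}^{i,b} v_i ,\;
           v_i - \max_{j \neq i} \big( v_j - c_{ij} \big) \Big\} = 0 ,
\qquad i = 1, \dots, m ,
```

where \(v_i\) is the value function in switching regime \(i\), \(\mathcal{L}^{i,b}\) is the second-order generator of the state process in regime \(i\) under nature's open-loop control \(b\) (the infimum over \(b \in B\) encoding the worst case over the Knightian misspecifications), and \(c_{ij} \ge 0\) is the cost of switching from regime \(i\) to regime \(j\). The first term says that, away from switches, \(v_i\) solves a robust HJB equation; the second says the controller never lets \(v_i\) fall below the best attainable value after an immediate switch.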
A provably correct MPC approach to safety control of urban traffic networks
Model predictive control (MPC) is a popular strategy for urban traffic management that is able to incorporate physical and user-defined constraints. However, current MPC methods rely on finite-horizon predictions that are unable to guarantee desirable behavior over long periods of time. In this paper we design an MPC strategy that is guaranteed to keep the evolution of a network in a desirable yet arbitrary "safe" set, while optimizing a finite-horizon cost function. Our approach relies on finding a robust controlled invariant set inside the safe set that provides an appropriate terminal constraint for the MPC optimization problem. An illustrative example is included. This work was partially supported by the NSF under grants CPS-1446151 and CMMI-1400167.
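The construction described above (a robust controlled invariant subset of the safe set, used as a terminal constraint for the MPC problem) can be sketched for a toy scalar system; the system, bounds, and function name below are illustrative assumptions, not taken from the paper:

```python
def robust_invariant_interval(safe_lo, safe_hi, u_max, w_max, a=1.0, iters=50):
    """Fixed-point iteration for the maximal robust controlled invariant
    interval of the scalar system x+ = a*x + u + w, with |u| <= u_max and
    disturbance |w| <= w_max, inside the safe interval [safe_lo, safe_hi].

    x can stay in [lo, hi] iff some admissible u keeps a*x + u + w in
    [lo, hi] for ALL disturbances w, i.e. a*x + u lies in
    [lo + w_max, hi - w_max]; shrink the interval until this is stable.
    """
    lo, hi = safe_lo, safe_hi
    for _ in range(iters):
        new_lo = max(safe_lo, (lo + w_max - u_max) / a)
        new_hi = min(safe_hi, (hi - w_max + u_max) / a)
        if new_hi < new_lo:
            return None  # no robust invariant subset of the safe set exists
        if (new_lo, new_hi) == (lo, hi):
            break
        lo, hi = new_lo, new_hi
    return lo, hi

# Unstable dynamics (a = 1.2): the iteration converges towards [0.0, 3.5],
# a strict subset of the safe set [0, 10].
inv = robust_invariant_interval(0.0, 10.0, u_max=1.0, w_max=0.3, a=1.2)
assert inv is not None
```

An MPC scheme along the lines of the abstract would then add the terminal constraint that the predicted state at the end of the horizon lies in this interval, which is what lets a finite-horizon optimization certify safety for all time.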
ES Is More Than Just a Traditional Finite-Difference Approximator
An evolution strategy (ES) variant based on a simplification of a natural
evolution strategy recently attracted attention because it performs
surprisingly well in challenging deep reinforcement learning domains. It
searches for neural network parameters by generating perturbations to the
current set of parameters, checking their performance, and moving in the
aggregate direction of higher reward. Because it resembles a traditional
finite-difference approximation of the reward gradient, it can naturally be
confused with one. However, this ES optimizes for a different gradient than
just reward: It optimizes for the average reward of the entire population,
thereby seeking parameters that are robust to perturbation. This difference can
channel ES into distinct areas of the search space relative to gradient
descent, and also consequently to networks with distinct properties. This
unique robustness-seeking property, and its consequences for optimization, are
demonstrated in several domains. They include humanoid locomotion, where
networks from policy gradient-based reinforcement learning are significantly
less robust to parameter perturbation than ES-based policies solving the same
task. While the implications of such robustness and robustness-seeking remain
open to further study, this work's main contribution is to highlight such
differences and their potential importance.
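The perturb-evaluate-aggregate update described in this abstract can be sketched as follows. This is a simplified version with the common antithetic-sampling refinement, not the exact algorithm from the paper, and the toy reward and all parameter values are illustrative:

```python
import random

def es_step(theta, reward, sigma=0.1, alpha=0.05, pairs=50, rng=random):
    """One ES step: perturb the current parameter vector, evaluate each
    perturbation, and move in the reward-weighted average direction."""
    grad = [0.0] * len(theta)
    for _ in range(pairs):
        eps = [rng.gauss(0.0, 1.0) for _ in theta]
        r_plus = reward([t + sigma * e for t, e in zip(theta, eps)])
        r_minus = reward([t - sigma * e for t, e in zip(theta, eps)])
        # Antithetic pair: the difference estimates the directional derivative.
        for i, e in enumerate(eps):
            grad[i] += (r_plus - r_minus) * e / (2 * pairs * sigma)
    return [t + alpha * g for t, g in zip(theta, grad)]

# Toy reward: negative squared distance to the point (3, -2).
reward = lambda x: -((x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2)

rng = random.Random(0)
theta = [0.0, 0.0]
for _ in range(300):
    theta = es_step(theta, reward, rng=rng)
```

The key point the abstract makes is visible in the objective being optimized: each update averages reward over the whole perturbed population, so it ascends the smoothed reward E[reward(theta + sigma * eps)] rather than reward(theta) itself, which favors flat, perturbation-tolerant optima over sharp ones.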
Learning Deep Similarity Metric for 3D MR-TRUS Registration
Purpose: The fusion of transrectal ultrasound (TRUS) and magnetic resonance
(MR) images for guiding targeted prostate biopsy has significantly improved the
biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image
registration. However, it is very challenging to obtain a robust automatic
MR-TRUS registration due to the large appearance difference between the two
imaging modalities. The work presented in this paper aims to tackle this
problem by addressing two challenges: (i) the definition of a suitable
similarity metric and (ii) the determination of a suitable optimization
strategy.
Methods: This work proposes the use of a deep convolutional neural network to
learn a similarity metric for MR-TRUS registration. We also use a composite
optimization strategy that explores the solution space in order to search for a
suitable initialization for the second-order optimization of the learned
metric. Further, a multi-pass approach is used in order to smooth the metric
for optimization.
Results: The learned similarity metric outperforms the classical mutual
information and also the state-of-the-art MIND feature based methods. The
results indicate that the overall registration framework has a large capture
range. The proposed deep-similarity-metric-based approach obtained a mean TRE
of 3.86 mm (with an initial TRE of 16 mm) for this challenging problem.
Conclusion: A similarity metric that is learned using a deep neural network
can be used to assess the quality of any given image registration and can be
used in conjunction with the aforementioned optimization framework to perform
automatic registration that is robust to poor initialization.
Comment: to appear in IJCARS
Online Selection of CMA-ES Variants
In the field of evolutionary computation, one of the most challenging topics
is algorithm selection. Knowing which heuristics to use for which optimization
problem is key to obtaining high-quality solutions. We aim to extend this
research topic by taking a first step towards a selection method for adaptive
CMA-ES algorithms. We build upon the theoretical work done by van Rijn
et al. [PPSN'18], in which the potential of switching between
different CMA-ES variants was quantified in the context of a modular CMA-ES
framework.
We demonstrate in this work that their proposed approach is not very
reliable, in that implementing the suggested adaptive configurations does not
yield the predicted performance gains. We propose a revised approach, which
results in a more robust fit between predicted and actual performance. The
adaptive CMA-ES approach obtains performance gains on 18 out of 24 tested
functions of the BBOB benchmark, with stable advantages of up to 23%. An
analysis of module activation indicates which modules are most crucial for the
different phases of optimizing each of the 24 benchmark problems. The module
activation also suggests that additional gains are possible when including the
(B)IPOP modules, which we have excluded from the present work.
Comment: to appear at the Genetic and Evolutionary Computation Conference
(GECCO'19); the appendix will be added in due time
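The idea of selecting between variants online can be illustrated with a racing-style toy sketch. Here the "variants" are just two mutation strengths for a (1+1)-ES rather than full CMA-ES configurations, each round gives every variant a short probe budget and continues from the best probe, and all names and values are hypothetical:

```python
import random

def one_plus_one_step(x, fx, f, sigma, rng):
    """One (1+1)-ES step: mutate with strength sigma, keep the better point."""
    y = [xi + rng.gauss(0.0, sigma) for xi in x]
    fy = f(y)
    return (y, fy) if fy < fx else (x, fx)

def online_selection(f, x0, sigmas=(1.0, 0.01), probe=30, rounds=20, seed=0):
    """Each round, run every variant for a short probe from the current best
    point and continue from the best probe result, so different variants can
    dominate different phases of the run."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(rounds):
        probes = []
        for sigma in sigmas:
            xi, fxi = x, fx
            for _ in range(probe):
                xi, fxi = one_plus_one_step(xi, fxi, f, sigma, rng)
            probes.append((fxi, xi))
        fx, x = min(probes)
    return x, fx

sphere = lambda x: sum(xi * xi for xi in x)
x, fx = online_selection(sphere, [5.0, 5.0])
```

On this sphere function the large mutation strength dominates early, far from the optimum, and the small one dominates late, mirroring the abstract's observation that different modules are most useful in different phases of the optimization.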