Model-based relative entropy stochastic search
Stochastic search algorithms are general black-box optimizers. Due to their ease
of use and their generality, they have recently also gained a lot of attention in operations
research, machine learning and policy search. Yet, these algorithms require many
evaluations of the objective, scale poorly with the problem dimension, are
sensitive to highly noisy objective functions, and may converge prematurely. To
alleviate these problems, we introduce a new surrogate-based stochastic search
approach. We learn simple, quadratic surrogate models of the objective function.
As the quality of such a quadratic approximation is limited, we do not greedily
exploit the learned models: doing so could mislead the algorithm with an inaccurate
optimum introduced by the surrogate. Instead, we use information-theoretic constraints to
bound the ‘distance’ between the new and old data distribution while maximizing
the objective function. Additionally, the new method sustains the exploration
of the search distribution to avoid premature convergence. We compare our
method with state-of-the-art black-box optimization methods on standard uni-modal
and multi-modal optimization functions, on simulated planar robot tasks, and on a
complex robot ball-throwing task. The proposed method considerably outperforms
the existing approaches.
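The core update can be illustrated in a toy setting. The sketch below is a simplified reading of the approach, not the authors' implementation: it omits the entropy constraint, fixes the KL bound at an assumed ε = 0.5, and finds the temperature η by bisection. It fits a quadratic surrogate to samples from a Gaussian search distribution and applies the KL-bounded closed-form update, where the new density is proportional to the old Gaussian times exp(f(x)/η):

```python
import numpy as np

def fit_quadratic(X, y):
    """Least-squares fit of f(x) ≈ -0.5 x^T A x + a^T x + c (2-D case)."""
    x1, x2 = X[:, 0], X[:, 1]
    Phi = np.column_stack([np.ones(len(y)), x1, x2,
                           -0.5 * x1**2, -0.5 * x2**2, -x1 * x2])
    c, a1, a2, A11, A22, A12 = np.linalg.lstsq(Phi, y, rcond=None)[0]
    return np.array([[A11, A12], [A12, A22]]), np.array([a1, a2])

def kl_gauss(m1, S1, m0, S0):
    """KL divergence KL(N(m1,S1) || N(m0,S0))."""
    d, S0inv = len(m0), np.linalg.inv(S0)
    diff = m0 - m1
    return 0.5 * (np.trace(S0inv @ S1) + diff @ S0inv @ diff - d
                  + np.log(np.linalg.det(S0) / np.linalg.det(S1)))

def more_step(mu, Sigma, A, a, eps):
    """KL-bounded update: new density ∝ N(mu, Sigma) * exp(f(x)/eta)."""
    P = np.linalg.inv(Sigma)

    def update(eta):
        S = np.linalg.inv(P + A / eta)       # new covariance
        return S @ (P @ mu + a / eta), S     # new mean

    lo, hi = 1e-4, 1e6                       # bisection over eta (log scale)
    for _ in range(80):
        eta = np.sqrt(lo * hi)
        m, S = update(eta)
        if kl_gauss(m, S, mu, Sigma) > eps:
            lo = eta                         # step too large: trust old dist more
        else:
            hi = eta
    return update(hi)                        # hi always satisfies the KL bound

rng = np.random.default_rng(0)
target = np.array([2.0, 1.0])
f = lambda X: -np.sum((X - target) ** 2, axis=1)   # toy objective to maximize

mu, Sigma = np.zeros(2), 4.0 * np.eye(2)
for _ in range(12):
    X = rng.multivariate_normal(mu, Sigma, size=40)
    A, a = fit_quadratic(X, f(X))
    mu, Sigma = more_step(mu, Sigma, A, a, eps=0.5)
```

On this toy quadratic the surrogate is exact, so the search mean contracts toward the optimum at a rate governed entirely by the KL bound. Note that without the entropy term of the full method the covariance also shrinks every step, which is precisely the premature-convergence issue the sustained-exploration mechanism addresses.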
Intensive Surrogate Model Exploitation in Self-adaptive Surrogate-assisted CMA-ES (saACM-ES)
This paper presents a new mechanism for better exploitation of surrogate models in the framework of Evolution Strategies (ESs). The mechanism is instantiated here on the self-adaptive surrogate-assisted Covariance Matrix Adaptation Evolution Strategy (saACM-ES), a recently proposed surrogate-assisted variant of CMA-ES. As in the original saACM-ES, the expensive function is optimized by exploiting the surrogate model, whose hyper-parameters are also optimized online. The main novelty is a more intensive exploitation of the surrogate model, using much larger population sizes for its optimization. The new variant significantly improves on the original saACM-ES and further increases the speed-up over CMA-ES, especially on unimodal functions (e.g., on the 20-dimensional Rotated Ellipsoid, saACM-ES is 6 times faster than aCMA-ES and almost one order of magnitude faster than CMA-ES). The empirical validation on the BBOB-2013 noiseless testbed demonstrates the efficiency and robustness of the proposed mechanism.
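The intensified-exploitation idea — rank a much larger population on the cheap surrogate and spend true evaluations only on the most promising candidates — can be sketched generically. The toy below is not saACM-ES (which builds a ranking-SVM surrogate inside CMA-ES and adapts its hyper-parameters online); it uses a plain (μ, λ)-ES with an archive-based k-nearest-neighbour surrogate, purely to illustrate the pre-screening mechanism:

```python
import numpy as np

def f_true(x):
    """Expensive objective (toy stand-in): shifted sphere, to be minimized."""
    return np.sum((x - np.array([1.5, -0.5])) ** 2)

def knn_surrogate(archive_X, archive_y, X, k=5):
    """Cheap surrogate: mean fitness of the k nearest archived evaluations."""
    preds = []
    for x in X:
        d = np.linalg.norm(archive_X - x, axis=1)
        preds.append(archive_y[np.argsort(d)[:k]].mean())
    return np.array(preds)

rng = np.random.default_rng(1)
mean, sigma = np.zeros(2), 1.0
lam_true, lam_surr, mu_sel = 10, 200, 3    # few true evals, big surrogate pool

# Warm-up generation: no surrogate yet, evaluate directly.
X = mean + sigma * rng.standard_normal((lam_true, 2))
y = np.array([f_true(x) for x in X])
archive_X, archive_y = X.copy(), y.copy()

for _ in range(60):
    # 1. Sample a LARGE pool and rank it on the cheap surrogate.
    pool = mean + sigma * rng.standard_normal((lam_surr, 2))
    scores = knn_surrogate(archive_X, archive_y, pool)
    # 2. Spend true evaluations only on the surrogate's top candidates.
    cand = pool[np.argsort(scores)[:lam_true]]
    y = np.array([f_true(x) for x in cand])
    archive_X = np.vstack([archive_X, cand])
    archive_y = np.concatenate([archive_y, y])
    # 3. Standard (mu, lambda) recombination with naive step-size decay.
    mean = cand[np.argsort(y)[:mu_sel]].mean(axis=0)
    sigma *= 0.95

best = archive_X[np.argmin(archive_y)]
```

Here ranking the 200 pool points costs only surrogate calls, while the true function is evaluated 10 times per generation; the larger the pool, the more aggressively the surrogate's picture of the landscape is exploited — the mechanism the paper pushes much further inside saACM-ES.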
Information theoretic stochastic search
The MAP-i Doctoral Programme in Informatics of the Universities of Minho, Aveiro and Porto.

Optimization is the research field that studies the design of algorithms for finding the
best solutions to the problems posed to them. While the whole domain is practically
important, the present thesis focuses on the subfield of continuous black-box
optimization, presenting a collection of novel, state-of-the-art algorithms for solving
problems in that class. In this thesis, we introduce two novel general-purpose
stochastic search algorithms for black-box optimization. Stochastic search algorithms
aim at repeating the type of mutations that led to the fittest search points in a population.
We can model those mutations by a stochastic distribution, typically a multivariate
Gaussian distribution. The key idea is to iteratively change the parameters of the
distribution towards higher expected fitness. We show that plain maximization of the
expected fitness, without bounding the change of the distribution, is destined to fail
because of overfitting and results in premature convergence; we therefore leverage
information-theoretic trust regions to limit the change of the new distribution.
Being derived from first principles, the proposed methods can be elegantly extended
to the contextual learning setting, which allows for learning context-dependent
stochastic distributions that generate optimal individuals for a given context, i.e.,
instead of learning one task at a time, we can learn multiple related tasks at once.
However, the search distribution typically uses a parametric model with some
hand-defined context features. Finding good context features is a challenging task,
and hence non-parametric methods are often preferred over their parametric
counterparts. Therefore, we further propose a non-parametric contextual stochastic
search algorithm that can learn a non-parametric search distribution
for multiple tasks simultaneously.

Funded by FCT - Fundação para a Ciência e a Tecnologia, by the European Union's FP7 under EuRoC grant agreement CP-IP 608849, and by LIACC (UID/CEC/00027/2015) and IEETA (UID/CEC/00127/2015).
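The contextual extension described in the thesis abstract can be illustrated with a heavily simplified sketch. The code below is not the thesis algorithm: it replaces the optimized KL trust region with a fixed softmax temperature and uses a hand-coded linear context feature (both assumptions for illustration), purely to show how one search distribution N(W·φ(s), σ²I) can be trained on a continuum of related tasks at once:

```python
import numpy as np

rng = np.random.default_rng(2)

def reward(X, s):
    """Context-dependent task: the optimum moves linearly with context s."""
    targets = np.column_stack([s, 2.0 * s])
    return -np.sum((X - targets) ** 2, axis=1)

phi = lambda s: np.column_stack([np.ones_like(s), s])  # hand-defined features

W = np.zeros((2, 2))   # mean map: mu(s) = phi(s) @ W
sigma = 1.0

for _ in range(80):
    s = rng.uniform(-1.0, 1.0, size=100)                 # sample contexts
    F = phi(s)
    X = F @ W + sigma * rng.standard_normal((100, 2))    # sample individuals
    R = reward(X, s)
    # Softmax weights favour high-reward samples. A fixed temperature is used
    # here, where a trust-region method would solve for it from a KL constraint.
    temp = max(R.max() - R.min(), 1e-9) / 5.0
    w = np.exp((R - R.max()) / temp)
    # Weighted linear regression: pull mu(s) toward well-rewarded individuals.
    D = np.diag(w)
    W = np.linalg.solve(F.T @ D @ F + 1e-6 * np.eye(2), F.T @ D @ X)
    sigma *= 0.95

mu_at = lambda s: np.array([1.0, s]) @ W   # learned mean for a given context
```

Because the mean is a function of the context, a single distribution ends up producing near-optimal individuals for every task in the family — the "learn multiple related tasks at once" behaviour; the thesis's non-parametric variant additionally removes the need for the hand-defined φ(s).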