Inference in the presence of outliers is an important field of research, because
outliers are ubiquitous and may arise across a variety of problems and domains.
Bayesian optimization is a method that relies heavily on probabilistic inference.
This allows outstanding sample efficiency because the probabilistic machinery
provides a memory of the whole optimization process. However, that virtue
becomes a disadvantage when the memory is populated with outliers, inducing
bias in the estimation. In this paper, we present an empirical evaluation of
Bayesian optimization methods in the presence of outliers. The empirical
evidence shows that Bayesian optimization with robust regression often produces
suboptimal results. We then propose a new algorithm which combines robust
regression (a Gaussian process with Student-t likelihood) with outlier
diagnostics to classify data points as outliers or inliers. By using a
scheduler for the classification of outliers, our method is more efficient and
achieves better convergence than standard robust regression. Furthermore, we
show that even in controlled situations with no expected outliers, our method
is able to produce better results.
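
As an illustration of the scheme described above, the following Python sketch combines robust GP regression (a Student-t likelihood fitted with Laplace inference) with a simple residual-based outlier diagnostic and a fixed-period scheduler. It uses the GPy library; the Matern kernel, the standardized-residual threshold, the rerun period, and all function names are illustrative assumptions rather than the paper's exact choices.

    import numpy as np
    import GPy  # assumed dependency; any GP library with a Student-t likelihood would do

    def fit_robust_gp(X, Y, deg_free=4.0):
        """Robust GP regression: Student-t likelihood, fitted with Laplace inference."""
        kernel = GPy.kern.Matern52(input_dim=X.shape[1])
        likelihood = GPy.likelihoods.StudentT(deg_free=deg_free)
        laplace = GPy.inference.latent_function_inference.Laplace()
        model = GPy.core.GP(X, Y, kernel=kernel, likelihood=likelihood,
                            inference_method=laplace)
        model.optimize()
        return model

    def classify_outliers(model, X, Y, threshold=2.0):
        """Outlier diagnostic: flag points with large standardized residuals
        under the robust model. The threshold value is an illustrative choice."""
        mu, var = model.predict(X)
        z = np.abs(Y - mu) / np.sqrt(var)
        return (z > threshold).ravel()  # True = suspected outlier

    def scheduled_surrogate(X, Y, iteration, period=5, inliers=None):
        """Scheduler: rerun the (more expensive) robust fit and re-classify
        outliers only every `period` iterations; in between, fit a standard
        GP on the current inliers. The period and loop structure are assumptions."""
        if inliers is None or iteration % period == 0:
            robust = fit_robust_gp(X, Y)
            inliers = ~classify_outliers(robust, X, Y)
        surrogate = GPy.models.GPRegression(X[inliers], Y[inliers])
        surrogate.optimize()
        return surrogate, inliers

The scheduler reflects the trade-off the abstract describes: the Student-t fit is what makes the diagnostics reliable, but running it at every iteration is wasteful, so a cheaper Gaussian surrogate on the inliers handles the intervening steps.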