Bayesian Quantile and Expectile Optimisation
Bayesian optimisation is widely used to optimise stochastic black-box
functions. While most strategies focus on optimising conditional
expectations, a large variety of applications require risk-averse decisions,
so alternative criteria that account for the distribution tails need to be
considered. In this paper, we propose new variational models for Bayesian
quantile and expectile regression that are well suited to heteroscedastic
settings. Our models consist of two latent Gaussian processes, accounting
respectively for the conditional quantile (or expectile) and the variance,
that are chained through asymmetric likelihood functions. Furthermore, we
propose two Bayesian optimisation strategies, derived from GP-UCB and
Thompson sampling respectively, that are tailored to such models and can
accommodate large batches of points. As illustrated in the experimental
section, the proposed approach clearly outperforms the state of the art.
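The chained construction described above can be made concrete through the asymmetric Laplace distribution, a standard likelihood in Bayesian quantile regression. The sketch below is an illustration of that general idea, not the paper's exact parameterisation: a location function plays the role of the conditional quantile and a log-scale function captures heteroscedastic noise, mirroring the two chained latent processes.

```python
import numpy as np

def asym_laplace_loglik(y, loc, log_scale, tau):
    """Log-density of the asymmetric Laplace distribution.

    `loc` plays the role of the tau-quantile and `log_scale` an
    observation-dependent scale; both would be latent GP draws in the
    model sketched above. Illustrative parameterisation only.
    """
    scale = np.exp(log_scale)
    r = (y - loc) / scale
    # Pinball (check) term: asymmetric weighting of residual signs.
    check = np.where(r >= 0, tau * r, (tau - 1) * r)
    return np.log(tau * (1 - tau)) - log_scale - check

# Toy heteroscedastic data: noise level grows with x.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + (0.1 + 0.5 * x) * rng.normal(size=x.size)

# Evaluate the likelihood at the (here, known) generating functions.
ll = asym_laplace_loglik(y, np.sin(2 * np.pi * x),
                         np.log(0.1 + 0.5 * x), tau=0.9)
print(ll.mean())
```

In a full model the location and log-scale arrays would be replaced by GP posterior samples, and the log-likelihood would enter a variational objective rather than being evaluated at fixed functions.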
Variational Inference for Nonparametric Bayesian Quantile Regression
Quantile regression deals with the problem of computing robust estimators when the conditional mean and standard deviation of the predicted function are inadequate to capture its variability. The technique has an extensive list of applications, including health sciences, ecology and finance. In this work we present a nonparametric method of inferring quantiles and derive a novel Variational Bayesian (VB) approximation to the marginal likelihood, leading to an elegant Expectation Maximisation algorithm for learning the model. Our method is nonparametric, has strong convergence guarantees, and can deal with nonsymmetric quantiles seamlessly. We compare the method to other parametric and nonparametric Bayesian techniques, as well as to alternative approximations based on expectation propagation, demonstrating the benefits of our framework on toy problems and real datasets.
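The quantile estimators discussed in both abstracts ultimately rest on the asymmetric check (pinball) loss, whose expected-risk minimiser is the tau-quantile. As a minimal illustration of that property, independent of any particular Bayesian model, the sketch below recovers empirical quantiles of a sample by minimising the empirical pinball loss over a grid of candidates:

```python
import numpy as np

def pinball_loss(residual, tau):
    # Check (pinball) loss: positive residuals are weighted by tau,
    # negative ones by (1 - tau), so the minimiser is the tau-quantile.
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)

# Minimise the empirical pinball loss over a grid of candidate values;
# the minimiser should sit near the empirical tau-quantile.
grid = np.linspace(-3.0, 3.0, 2001)
for tau in (0.1, 0.5, 0.9):
    losses = [pinball_loss(y - g, tau).mean() for g in grid]
    q_hat = grid[np.argmin(losses)]
    print(tau, q_hat, np.quantile(y, tau))
```

For nonsymmetric quantiles (tau far from 0.5) the loss is strongly lopsided, which is exactly what lets the methods above target distribution tails rather than the conditional mean.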