
    Predictive entropy search for multi-objective Bayesian optimization with constraints

    This work presents PESMOC, Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints, an information-based strategy for the simultaneous optimization of multiple expensive-to-evaluate black-box functions in the presence of several constraints. Iteratively, PESMOC chooses an input location at which to evaluate the objective functions and the constraints so as to maximally reduce the entropy of the Pareto set of the corresponding optimization problem. The constraints considered in PESMOC are assumed to have similar properties to those of the objectives in typical Bayesian optimization problems. That is, they lack a known expression (which prevents any gradient computation), their evaluation is very expensive, and the resulting observations may be corrupted by noise. Importantly, in PESMOC the acquisition function is decomposed as a sum of objective-specific and constraint-specific acquisition functions. This enables the use of the algorithm in decoupled evaluation scenarios in which objectives and constraints can be evaluated separately, and perhaps at different costs. Therefore, PESMOC makes intelligent decisions not only about where next to evaluate the problem's objectives and constraints, but also about which objective or constraint to evaluate next. We present strong empirical evidence in the form of synthetic, benchmark, and real-world experiments that illustrate the effectiveness of PESMOC. In these experiments PESMOC outperforms other state-of-the-art methods for constrained multi-objective Bayesian optimization based on a generalization of the expected improvement.
    The results obtained also show that a decoupled evaluation scenario can lead to significant improvements over a coupled one in which objectives and constraints are evaluated at the same input. The authors acknowledge the use of the facilities of Centro de Computación Científica (CCC) at Universidad Autónoma de Madrid, and financial support from the Spanish Plan Nacional I+D+i, Grants TIN2016-76406-P and TEC2016-81900-REDT, and from Comunidad de Madrid, Grant S2013/ICE-2845 CASI-CAM-CM.
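The decoupled selection idea described in the abstract can be illustrated with a small sketch. Because the acquisition function decomposes into per-objective and per-constraint terms, one can pick not only where to evaluate but which black box to query next, e.g. by trading acquisition value against evaluation cost. The acquisition values and costs below are illustrative placeholders, not PESMOC's actual entropy computations:

```python
import math

def pick_next_evaluation(candidates, per_function_acq, costs):
    """Choose the (function_name, x) pair maximizing acquisition per unit cost.

    per_function_acq: dict mapping function name -> callable acq(x)
    costs: dict mapping function name -> evaluation cost
    """
    best, best_score = None, -math.inf
    for name, acq in per_function_acq.items():
        for x in candidates:
            score = acq(x) / costs[name]
            if score > best_score:
                best_score, best = score, (name, x)
    return best

# Toy example: one objective and one constraint with different costs.
# The lambdas stand in for per-function entropy-reduction estimates.
candidates = [0.0, 0.5, 1.0]
acq = {
    "objective_1": lambda x: 1.0 - (x - 0.5) ** 2,
    "constraint_1": lambda x: 0.3 + 0.1 * x,
}
costs = {"objective_1": 2.0, "constraint_1": 1.0}

print(pick_next_evaluation(candidates, acq, costs))  # → ('objective_1', 0.5)
```

With these placeholder numbers, the objective is worth querying despite its higher cost; cheaper constraint evaluations would win whenever their expected entropy reduction per unit cost is higher.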

    Bayesian optimization of a hybrid system for robust ocean wave features prediction

    In recent years, Bayesian optimization (BO) has emerged as a practical tool for high-quality parameter selection in prediction systems. BO methods are useful for optimizing black-box objective functions that either lack an analytical expression or are very expensive to evaluate. In this paper, we show that BO can be used to obtain the optimal parameters of a prediction system for problems related to ocean wave features prediction. Specifically, we propose the Bayesian optimization of a hybrid Grouping Genetic Algorithm for attribute selection combined with an Extreme Learning Machine (GGA-ELM) approach for prediction. The system uses data from neighboring stations (usually buoys) in order to predict the significant wave height and the wave energy flux at a goal marine structure facility. The proposed BO methodology has been tested on a real problem involving buoy data from the western coast of the USA, improving the performance of the GGA-ELM without a BO approach. This work has been partially supported by Comunidad de Madrid, under projects S2013/ICE-2933 and S2013/ICE-2845, and by National projects TIN2014-54583-C2-2-R, TIN2013-42351-P and TIN2016-76406-P of the Spanish Ministerial Commission of Science and Technology (MICYT). We acknowledge support by the DAMA network, TIN2015-70308-REDT. We acknowledge the use of the facilities of Centro de Computación Científica de la UAM.
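The outer loop described here can be sketched in a few lines. The expensive objective (in the paper, the validation error of the GGA-ELM system) is replaced below by a cheap stand-in, and the surrogate is a deliberately simple distance-based score rather than the Gaussian process a real BO method would use; the structure of the loop (fit surrogate, maximize acquisition, evaluate, repeat) is what the sketch is meant to show:

```python
import random

def expensive_objective(params):
    # Stand-in for e.g. the cross-validated error of a GGA-ELM predictor.
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

def acquisition(candidate, observed):
    # Crude surrogate: predict the value of the nearest past point, and
    # subtract a distance-based exploration bonus (a stand-in for the
    # posterior uncertainty a GP surrogate would provide). Lower is better.
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    nearest_params, nearest_val = min(
        observed, key=lambda pv: dist(candidate, pv[0]))
    return nearest_val - 0.5 * dist(candidate, nearest_params)

def bayes_opt(n_init=3, n_iter=10, seed=0):
    rng = random.Random(seed)
    # Initial design: a few random evaluations of the expensive objective.
    observed = []
    for _ in range(n_init):
        p = (rng.random(), rng.random())
        observed.append((p, expensive_objective(p)))
    # Sequential loop: pick the candidate the surrogate likes best.
    for _ in range(n_iter):
        cands = [(rng.random(), rng.random()) for _ in range(50)]
        p = min(cands, key=lambda c: acquisition(c, observed))
        observed.append((p, expensive_objective(p)))
    return min(observed, key=lambda pv: pv[1])

best_params, best_val = bayes_opt()
print(best_params, best_val)
```

The point of BO in this setting is sample efficiency: each call to `expensive_objective` here is cheap, but when it is a full GGA-ELM training run, spending a surrogate model's effort to choose the next evaluation pays off.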

    Bayesian optimization of the PC algorithm for learning Gaussian Bayesian networks

    The PC algorithm is a popular method for learning the structure of Gaussian Bayesian networks. It carries out statistical tests to determine absent edges in the network. It is hence governed by two parameters: (i) the type of test, and (ii) its significance level. These parameters are usually set to values recommended by an expert. Nevertheless, such an approach can suffer from human bias, leading to suboptimal reconstruction results. In this paper we consider a more principled approach for choosing these parameters in an automatic way. For this, we optimize a reconstruction score evaluated on a set of different Gaussian Bayesian networks. This objective is expensive to evaluate and lacks a closed-form expression, which means that Bayesian optimization (BO) is a natural choice. BO methods use a model to guide the search and are hence able to exploit smoothness properties of the objective surface. We show that the parameters found by a BO method outperform those found by a random search strategy and by the expert recommendation. Importantly, we have found that an often overlooked statistical test provides the best overall reconstruction results.
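The search space here is mixed: a discrete choice of statistical test and a continuous significance level. The sketch below sets up that space and runs the random-search baseline that the paper reports BO outperforms; the test names and the reconstruction score are illustrative stand-ins (a real score would, e.g., compare the recovered and true network structures over several sampled Gaussian Bayesian networks):

```python
import random

# Hypothetical test names for illustration only.
TEST_TYPES = ["partial_correlation", "mutual_information", "fisher_z"]

def reconstruction_score(test_type, alpha):
    # Stand-in score: pretend each test type has a different best
    # significance level; higher is better, 0 is the optimum.
    best_alpha = {"partial_correlation": 0.05,
                  "mutual_information": 0.01,
                  "fisher_z": 0.1}[test_type]
    return -abs(alpha - best_alpha)

def random_search(n_iter=200, seed=1):
    rng = random.Random(seed)
    best = None
    for _ in range(n_iter):
        t = rng.choice(TEST_TYPES)
        a = 10 ** rng.uniform(-3, -0.5)  # sample alpha on a log scale
        s = reconstruction_score(t, a)
        if best is None or s > best[2]:
            best = (t, a, s)
    return best

print(random_search())
```

A BO method would replace the uniform sampling with a surrogate-guided choice of `(test_type, alpha)`, which matters when each score evaluation requires running the PC algorithm on many networks.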