Bayesian optimization of the PC algorithm for learning Gaussian Bayesian networks
The PC algorithm is a popular method for learning the structure of Gaussian
Bayesian networks. It carries out statistical tests to determine absent edges
in the network. It is hence governed by two parameters: (i) the type of test,
and (ii) its significance level. These parameters are usually set to values
recommended by an expert. Nevertheless, such an approach can suffer from human
bias, leading to suboptimal reconstruction results. In this paper we consider a
more principled approach for choosing these parameters in an automatic way. For
this we optimize a reconstruction score evaluated on a set of different
Gaussian Bayesian networks. This objective is expensive to evaluate and lacks a
closed-form expression, which means that Bayesian optimization (BO) is a
natural choice. BO methods use a model to guide the search and are hence able
to exploit smoothness properties of the objective surface. We show that the
parameters found by a BO method outperform those found by a random search
strategy and the expert recommendation. Importantly, we have found that an
often overlooked statistical test provides the best overall reconstruction
results.
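The loop the abstract describes can be sketched in a few lines. The following is a minimal toy example, not the paper's actual setup: the reconstruction score is a synthetic stand-in (in practice it would come from running PC on benchmark Gaussian Bayesian networks and comparing the learned structure with the truth), and the acquisition rule is a simple GP upper confidence bound over the significance level alone.

```python
# Toy Bayesian-optimization sketch over the PC algorithm's significance level.
# All specifics (objective shape, kernel, bounds) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_score(alpha):
    # Hypothetical smooth objective peaking near alpha = 0.05, plus noise.
    return -(np.log10(alpha) + 1.3) ** 2 + 0.01 * rng.normal()

def rbf_kernel(a, b, length=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-4):
    # Standard Gaussian-process regression equations.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_new)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_obs
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# Search over log10(alpha) in [-4, -0.5], i.e. alpha in [1e-4, ~0.3].
grid = np.linspace(-4.0, -0.5, 200)
x_obs = rng.uniform(-4.0, -0.5, size=3)          # initial random evaluations
y_obs = np.array([reconstruction_score(10 ** x) for x in x_obs])

for _ in range(15):
    mu, sd = gp_posterior(x_obs, y_obs, grid)
    ucb = mu + 2.0 * sd                          # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]
    y_next = reconstruction_score(10 ** x_next)  # one expensive evaluation
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, y_next)

best_alpha = 10 ** x_obs[np.argmax(y_obs)]
print(f"best significance level found: {best_alpha:.4f}")
```

The GP surrogate is what lets the search exploit the smoothness of the objective surface: points near good observed scores get high posterior mean, while unexplored regions keep high posterior variance and are still visited.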
Suggesting Cooking Recipes Through Simulation and Bayesian Optimization
Writing a good cooking recipe typically involves a plethora of decisions
about the ingredients and tools to be used. Cooking can thus be modelled in
an optimization framework, as it involves a search space of ingredients,
kitchen tools, cooking times, and temperatures. If we model the quality of
the recipe as an objective function, several problems arise. No analytical
expression can model all the recipes, so no gradients are available. The
objective function is subjective; in other words, it is noisy.
Moreover, evaluations are expensive both in time and human resources. Bayesian
Optimization (BO) emerges as an ideal methodology to tackle problems with these
characteristics. In this paper, we propose a methodology to suggest recipe
recommendations based on a Machine Learning (ML) model that fits real and
simulated data and BO. We provide empirical evidence with two experiments that
support the adequacy of the methodology.
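A sequential model-based loop of the kind the abstract motivates might look as follows. This is an illustrative sketch, not the paper's system: the noisy "taster" rating, the two-variable recipe space (baking time and oven temperature), and the quadratic least-squares surrogate standing in for the ML model are all assumptions made for the example.

```python
# Toy model-based recipe search: fit a cheap surrogate to noisy ratings,
# propose the recipe the surrogate likes best, taste it, refit, repeat.
import numpy as np

rng = np.random.default_rng(1)

def taster_score(time_min, temp_c):
    # Hypothetical subjective rating: best around 12 min at 180 C, with noise.
    return (10 - 0.05 * (time_min - 12) ** 2
            - 0.001 * (temp_c - 180) ** 2
            + rng.normal(scale=0.3))

def features(x):
    # Quadratic feature map so least squares can fit a curved surface.
    t, c = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(t), t, c, t * t, c * c, t * c])

# A few initial tastings seed the surrogate model.
X = rng.uniform([5, 140], [25, 230], size=(8, 2))
y = np.array([taster_score(t, c) for t, c in X])

for _ in range(20):
    w, *_ = np.linalg.lstsq(features(X), y, rcond=None)   # refit surrogate
    cand = rng.uniform([5, 140], [25, 230], size=(256, 2))  # random candidates
    pred = features(cand) @ w
    x_next = cand[np.argmax(pred)]     # recipe the surrogate rates highest
    y_next = taster_score(*x_next)     # one expensive, noisy human tasting
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

best = X[np.argmax(y)]
print(f"suggested recipe: {best[0]:.1f} min at {best[1]:.0f} C")
```

Because each real tasting is expensive, the loop spends its budget where the surrogate, trained on all data gathered so far, predicts high quality, which is the core trade-off that makes BO-style methods attractive for this problem.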