Towards an inclusion driven learning of Bayesian Networks

Abstract

Two or more Bayesian Networks are Markov equivalent when their corresponding acyclic digraphs encode the same set of conditional independence (CI) restrictions. The search space of Bayesian Networks may therefore be organized into equivalence classes, each consisting of a particular set of CI restrictions. The collection of sets of CI restrictions obeys a partial order, the graphical Markov model inclusion partial order, or inclusion order for short. This paper discusses in depth the role that the inclusion order plays in learning the structure of Bayesian Networks. We prove that under very special conditions the traditional hill-climber always recovers the right structure. Moreover, we extend the recent experimental results presented in Kocka and Castelo (2001). We show how learning algorithms for Bayesian Networks that take the inclusion order into account perform better than those that do not, and we introduce two new such algorithms in the context of heuristic search and the MCMC method.
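The notion of Markov equivalence mentioned above can be illustrated with a small sketch. By the well-known Verma–Pearl criterion (not a method from this paper), two acyclic digraphs are Markov equivalent if and only if they share the same skeleton and the same set of v-structures. The graph encoding below (a dict mapping each node to its parent set) and the example variable names are illustrative assumptions:

```python
# Sketch of the Verma-Pearl equivalence check, assuming DAGs are
# represented as dicts mapping node -> set of parent nodes.

def skeleton(dag):
    """Undirected edges of the DAG, as frozensets {u, v}."""
    return {frozenset((p, child))
            for child, parents in dag.items() for p in parents}

def v_structures(dag):
    """Triples (a, c, b) with a -> c <- b and a, b non-adjacent."""
    skel = skeleton(dag)
    return {(a, c, b)
            for c, parents in dag.items()
            for a in parents for b in parents
            if a < b and frozenset((a, b)) not in skel}

def markov_equivalent(g1, g2):
    """True iff g1 and g2 encode the same CI restrictions."""
    return (skeleton(g1) == skeleton(g2)
            and v_structures(g1) == v_structures(g2))

# a -> b -> c and a <- b <- c encode the same CI restriction (a _||_ c | b);
# the collider a -> b <- c encodes a different set (a _||_ c, marginally).
chain1 = {"a": set(), "b": {"a"}, "c": {"b"}}
chain2 = {"a": {"b"}, "b": {"c"}, "c": set()}
collider = {"a": set(), "b": {"a", "c"}, "c": set()}

print(markov_equivalent(chain1, chain2))    # True
print(markov_equivalent(chain1, collider))  # False
```

The two chains fall into the same equivalence class, while the collider belongs to a different one; equivalence classes like these are the elements ordered by the inclusion relation discussed in the paper.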
