
    Approximate learning of high dimensional Bayesian network structures via pruning of Candidate Parent Sets.

    Score-based algorithms that learn Bayesian Network (BN) structures provide solutions ranging from different levels of approximate learning to exact learning. Approximate solutions exist because exact learning is generally not applicable to networks of moderate or higher complexity. In general, approximate solutions tend to sacrifice accuracy for speed, where the aim is to minimise the loss in accuracy and maximise the gain in speed. While some approximate algorithms are optimised to handle thousands of variables, these algorithms may still be unable to learn such high dimensional structures. Some of the most efficient score-based algorithms cast the structure learning problem as a combinatorial optimisation of candidate parent sets. This paper explores a strategy for pruning the candidate parent sets, aimed at high-dimensionality problems. The results illustrate how different levels of pruning affect the learning speed relative to the loss in accuracy in terms of model fitting, and show that aggressive pruning may be required to produce approximate solutions for high complexity problems.
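
    To make the pruning strategy concrete, the following is a minimal Python sketch of one simple way to prune candidate parent sets: keep only the top-scoring sets per variable. It assumes local scores have already been computed; the function name, the max_sets parameter, and the toy scores are illustrative assumptions, not the paper's actual method.

        def prune_candidate_parent_sets(local_scores, max_sets):
            # local_scores: dict mapping each variable to a dict from
            # frozenset-of-parents to its precomputed local score
            # (e.g. BIC or BDeu, higher is better).
            # max_sets: the pruning level; smaller values prune more
            # aggressively, trading model fit for learning speed.
            pruned = {}
            for var, sets in local_scores.items():
                ranked = sorted(sets.items(), key=lambda kv: kv[1], reverse=True)
                pruned[var] = dict(ranked[:max_sets])
            return pruned

        # Hypothetical precomputed local scores for a three-variable problem.
        scores = {
            "A": {frozenset(): -10.0, frozenset({"B"}): -8.5, frozenset({"B", "C"}): -8.4},
            "B": {frozenset(): -7.0, frozenset({"C"}): -7.2},
            "C": {frozenset(): -5.0},
        }
        print(prune_candidate_parent_sets(scores, max_sets=2))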

    Learning Bayesian Networks with the Saiyan algorithm

    Some structure learning algorithms have proven to be effective in reconstructing hypothetical Bayesian Network (BN) graphs from synthetic data. However, in their mission to maximise a scoring function, many become conservative and minimise the number of edges discovered. While simplicity is desired, the output is often a graph that consists of multiple independent graphical fragments or isolated variables, and hence does not enable full propagation of evidence. While this is not a problem in theory, it can be a problem in practice. This paper presents a novel, unconventional heuristic local-search structure learning algorithm, called Saiyan, which returns a directed acyclic graph that enables full propagation of evidence. Forcing the algorithm to connect all data variables and to direct all of the edges discovered implies that the additional forced arcs are not expected to be correct as often as those identified without restriction, and this evidently has a negative impact on the evaluation score of the discovered graph. Still, based on both synthetic and real-world experiments, the Saiyan algorithm demonstrates competitive performance relative to other state-of-the-art constraint-based, score-based, and hybrid structure learning algorithms.
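
    The connectivity requirement described above can be checked mechanically: reading "full propagation of evidence" as requiring the DAG's undirected skeleton to form a single connected component (an interpretation assumed here, not stated by the authors), a minimal Python sketch of the check is:

        from collections import deque

        def enables_full_propagation(nodes, arcs):
            # Build the undirected skeleton of the DAG.
            adj = {v: set() for v in nodes}
            for u, v in arcs:
                adj[u].add(v)
                adj[v].add(u)
            # BFS from an arbitrary node; the graph is fragment-free
            # exactly when every node is reached.
            start = next(iter(nodes))
            seen, queue = {start}, deque([start])
            while queue:
                for w in adj[queue.popleft()] - seen:
                    seen.add(w)
                    queue.append(w)
            return len(seen) == len(nodes)

        # A graph made of two independent fragments fails the check.
        print(enables_full_propagation(["A", "B", "C", "D"],
                                       [("A", "B"), ("C", "D")]))  # False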

    Learning All Credible Bayesian Network Structures for Model Averaging

    A Bayesian network is a widely used probabilistic graphical model with applications in knowledge discovery and prediction. Learning a Bayesian network (BN) from data can be cast as an optimization problem using the well-known score-and-search approach. However, selecting a single model (i.e., the best-scoring BN) can be misleading and may not achieve the best possible accuracy. An alternative to committing to a single model is to perform some form of Bayesian or frequentist model averaging, where the space of possible BNs is sampled or enumerated in some fashion. Unfortunately, existing approaches for model averaging either severely restrict the structure of the Bayesian network or have only been shown to scale to networks with fewer than 30 random variables. In this paper, we propose a novel approach to model averaging inspired by performance guarantees in approximation algorithms. Our approach has two primary advantages. First, it only considers credible models, in that they are optimal or near-optimal in score. Second, it is more efficient and scales to significantly larger Bayesian networks than existing approaches.
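
    A minimal sketch of the credible-models idea, assuming candidate DAGs and their log scores have already been enumerated. The (1 + epsilon) cutoff mirrors the approximation-guarantee flavour described above, and the score-weighted arc averaging is one plausible choice rather than necessarily the paper's:

        import math

        def average_credible_networks(scored_dags, epsilon):
            # scored_dags: list of (set_of_arcs, log_score) pairs.
            # Keep only "credible" DAGs: those whose log score is within a
            # (1 + epsilon) factor of the best (log scores are negative,
            # so the factor loosens the bound below the maximum).
            best = max(s for _, s in scored_dags)
            credible = [(g, s) for g, s in scored_dags if s >= (1 + epsilon) * best]
            # Score-weighted averaging of arc indicators.
            weights = [math.exp(s - best) for _, s in credible]
            total = sum(weights)
            arc_prob = {}
            for (arcs, _), w in zip(credible, weights):
                for arc in arcs:
                    arc_prob[arc] = arc_prob.get(arc, 0.0) + w / total
            return arc_prob

        # Toy example: three candidate DAGs with hypothetical log scores.
        dags = [
            ({("A", "B")}, -100.0),
            ({("B", "A")}, -101.0),
            ({("A", "B"), ("B", "C")}, -120.0),  # outside the credible set
        ]
        print(average_credible_networks(dags, epsilon=0.05))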

    Learning Bayesian networks with ancestral constraints

    We consider the problem of learning Bayesian networks optimally, when subject to background knowledge in the form of ancestral constraints. Our approach is based on a recently proposed framework for optimal structure learning based on non-decomposable scores, which is general enough to accommodate ancestral constraints. The proposed framework exploits oracles for learning structures using decomposable scores, which cannot themselves accommodate ancestral constraints since those constraints are non-decomposable. We show how to empower these oracles by passing them decomposable constraints that they can handle, inferred from the ancestral constraints that they cannot. Empirically, we demonstrate that our approach can be orders of magnitude more efficient than alternative frameworks, such as those based on integer linear programming.
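
    As an illustration of how decomposable constraints can be inferred from non-decomposable ones: a required ancestor relation X before Y forbids the arc Y -> X (it would close a cycle), and a forbidden ancestor relation rules out the corresponding direct arc. The sketch below shows the flavour of that inference, not the paper's full machinery:

        def edge_constraints_from_ancestral(positive, negative):
            # positive: set of (x, y) pairs, "x must be an ancestor of y".
            # negative: set of (x, y) pairs, "x must NOT be an ancestor of y".
            # Returns directed edges that any consistent DAG must avoid.
            # Take the transitive closure of the required-ancestor relation.
            closure = set(positive)
            changed = True
            while changed:
                changed = False
                for (a, b) in list(closure):
                    for (c, d) in list(closure):
                        if b == c and (a, d) not in closure:
                            closure.add((a, d))
                            changed = True
            forbidden = set()
            for (x, y) in closure:
                forbidden.add((y, x))  # y -> x would create a cycle with x ~> y
            for (x, y) in negative:
                forbidden.add((x, y))  # x -> y would make x an ancestor of y
            return forbidden

        # A must be an ancestor of B, and B of C; C must not be an ancestor of A.
        print(edge_constraints_from_ancestral({("A", "B"), ("B", "C")}, {("C", "A")}))
        # Forbidden arcs: ("B", "A"), ("C", "B"), ("C", "A").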

    Learning Bayesian networks with local structure, mixed variables, and exact algorithms

    Modern exact algorithms for structure learning in Bayesian networks first compute an exact local score for every candidate parent set, and then find a network structure by combinatorial optimization so as to maximize the global score. This approach assumes that each local score can be computed fast, which can be problematic when the scarcity of the data calls for structured local models, or when there are both continuous and discrete variables, since these cases have lacked efficient-to-compute local scores. To address this challenge, we introduce a local score that is based on a class of classification and regression trees. We show that under modest restrictions on the possible branchings in the tree structure, it is feasible to find a structure that maximizes a Bayes score in a range of moderate-size problem instances. In particular, this enables global optimization of the Bayesian network structure, including the local structure. In addition, we introduce a related model class that extends ordinary conditional probability tables to continuous variables by employing an adaptive discretization approach. The two model classes are compared empirically by learning Bayesian networks from benchmark real-world and synthetic data sets. We discuss the relative strengths of the model classes in terms of their structure learning capability, predictive performance, and running time.
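
    The score-and-search pipeline this abstract builds on decomposes the global score into per-variable local scores and then optimises over structures. A generic sketch of that decomposition follows, with a brute-force search over orderings that is exact but feasible only for a handful of variables; the tree-based local scores themselves are treated here as a precomputed black box, which is an assumption of the sketch:

        from itertools import permutations

        def global_score(parents, local_scores):
            # Decomposable global score: the sum of per-variable local
            # scores, looked up from precomputed tables.
            return sum(local_scores[v][ps] for v, ps in parents.items())

        def best_network(local_scores):
            # Exhaustive exact search over variable orderings (tiny n only).
            # Under a fixed ordering, each variable independently picks its
            # best-scoring parent set among its predecessors, which
            # guarantees acyclicity of the combined structure.
            best, best_parents = float("-inf"), None
            for order in permutations(local_scores):
                parents = {}
                for i, v in enumerate(order):
                    allowed = set(order[:i])
                    cands = [ps for ps in local_scores[v] if ps <= allowed]
                    parents[v] = max(cands, key=lambda ps: local_scores[v][ps])
                s = global_score(parents, local_scores)
                if s > best:
                    best, best_parents = s, parents
            return best, best_parents

        # Toy local-score tables (the empty set must always be present).
        scores = {
            "A": {frozenset(): -5.0, frozenset({"B"}): -4.0},
            "B": {frozenset(): -6.0, frozenset({"A"}): -5.5},
        }
        print(best_network(scores))  # the best ordering places B first: arc B -> A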

    A survey of Bayesian Network structure learning
