
    Hybrid Bayesian Networks Using Mixtures of Truncated Basis Functions

    This paper introduces MoTBFs, an R package for manipulating mixtures of truncated basis functions. This class of functions allows the representation of joint probability distributions involving discrete and continuous variables simultaneously, and includes mixtures of truncated exponentials and mixtures of polynomials as special cases. The package implements functions for learning the parameters of univariate, multivariate, and conditional distributions, and provides support for parameter learning in Bayesian networks with both discrete and continuous variables. Probabilistic inference using forward sampling is also implemented. Part of the functionality of the MoTBFs package relies on the bnlearn package, which includes functions for learning the structure of a Bayesian network from a data set. Leveraging this functionality, the MoTBFs package supports learning of MoTBF-based Bayesian networks over hybrid domains. We give a brief introduction to the methodological context and algorithms implemented in the package. An extensive illustrative example is used to describe the package, its functionality, and its usage.
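    The forward-sampling inference mentioned in the abstract can be sketched in a few lines of Python (the MoTBFs package itself is R, and the toy network, probabilities, and function names below are purely illustrative, not part of its API): each variable is drawn in topological order from its conditional given the already-sampled parents, and queries are answered by counting.

    ```python
    import random

    # Illustrative hybrid network: discrete D in {0, 1}, continuous X | D.
    # Forward sampling draws each variable in topological order from its
    # conditional distribution given the already-sampled parent values.

    def sample_network(n, rng):
        samples = []
        for _ in range(n):
            d = 0 if rng.random() < 0.3 else 1   # P(D = 0) = 0.3
            # X | D is exponential with a rate that depends on D
            rate = 2.0 if d == 0 else 0.5
            x = rng.expovariate(rate)
            samples.append((d, x))
        return samples

    def estimate(query, samples):
        # Monte Carlo estimate of P(query) from the forward samples
        return sum(1 for s in samples if query(s)) / len(samples)

    draws = sample_network(50_000, random.Random(0))
    # P(D = 1, X > 1) = 0.7 * exp(-0.5) ~ 0.425
    p = estimate(lambda s: s[0] == 1 and s[1] > 1.0, draws)
    ```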

    Approximating probability density functions in hybrid Bayesian networks with mixtures of truncated exponentials

    Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization and Monte Carlo methods for solving hybrid Bayesian networks. Any probability density function (PDF) can be approximated by an MTE potential, which can always be marginalized in closed form. This allows propagation to be done exactly using the Shenoy-Shafer architecture for computing marginals, with no restrictions on the construction of a join tree. This paper presents MTE potentials that approximate standard PDFs and applications of these potentials for solving inference problems in hybrid Bayesian networks. These approximations will extend the types of inference problems that can be modeled with Bayesian networks, as demonstrated using three examples.
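    A minimal sketch of both ideas in the abstract, in Python rather than the paper's own construction: fit an MTE-style potential a0 + Σᵢ aᵢ·exp(bᵢx) to the standard normal PDF on [0, 3], with the exponents bᵢ fixed in advance (an assumption made here so the coefficients follow from plain linear least squares), then integrate the fitted potential in closed form, which is what makes exact marginalization possible.

    ```python
    import numpy as np

    lo, hi = 0.0, 3.0
    x = np.linspace(lo, hi, 601)
    target = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal PDF

    # Fixed, illustrative exponents; coefficients via linear least squares.
    exponents = [-1.0, -2.0, -3.0]
    basis = np.column_stack([np.ones_like(x)] +
                            [np.exp(b * x) for b in exponents])
    coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
    approx = basis @ coeffs

    # Closed-form integral of the fitted potential over [lo, hi]:
    #   int a0 dx = a0*(hi-lo),  int a*e^(b*x) dx = a/b * (e^(b*hi) - e^(b*lo))
    analytic = coeffs[0] * (hi - lo) + sum(
        a / b * (np.exp(b * hi) - np.exp(b * lo))
        for a, b in zip(coeffs[1:], exponents)
    )
    # Trapezoid-rule check of the same integral
    numeric = np.sum((approx[1:] + approx[:-1]) / 2 * np.diff(x))
    ```

    The closed-form and numerical integrals agree, and the fit is necessarily at least as good as the best constant approximation, since the constant is in the basis.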

    Approximate Probability Propagation with Mixtures of Truncated Exponentials

    Mixtures of truncated exponentials (MTEs) are a powerful alternative to discretisation when working with hybrid Bayesian networks. One of the features of the MTE model is that standard propagation algorithms can be used. However, the complexity of the process is too high and therefore approximate methods, which trade off complexity for accuracy, become necessary. In this paper we propose an approximate propagation algorithm for MTE networks which is based on the Penniless propagation method already known for discrete variables. We also consider how to use Markov chain Monte Carlo to carry out the probability propagation. The performance of the proposed methods is analysed in a series of experiments with random networks.

    Hybrid Bayesian Networks with Linear Deterministic Variables

    When a hybrid Bayesian network has conditionally deterministic variables with continuous parents, the joint density function for the continuous variables does not exist. Conditional linear Gaussian distributions can handle such cases when the continuous variables have a multivariate normal distribution and the discrete variables do not have continuous parents. In this paper, operations required for performing inference with conditionally deterministic variables in hybrid Bayesian networks are developed. These methods allow inference in networks with deterministic variables where continuous variables may be non-Gaussian, and their density functions can be approximated by mixtures of truncated exponentials. There are no constraints on the placement of continuous and discrete nodes in the network.

    Practical Aspects of Solving Hybrid Bayesian Networks Containing Deterministic Conditionals

    This is the author's final draft. Copyright 2015 Wiley. In this paper we discuss some practical issues that arise in solving hybrid Bayesian networks that include deterministic conditionals for continuous variables. We show how exact inference can become intractable even for small networks, due to the difficulty in handling deterministic conditionals (for continuous variables). We propose some strategies for carrying out the inference task using mixtures of polynomials and mixtures of truncated exponentials. Mixtures of polynomials can be defined on hypercubes or hyper-rhombuses. We compare these two methods. A key strategy is to re-approximate large potentials with potentials consisting of fewer pieces and lower degrees/number of terms. We discuss several methods for re-approximating potentials. We illustrate our methods in a practical application consisting of solving a stochastic PERT network.
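    The re-approximation idea, replacing a potential that has many pieces with one that has fewer, can be illustrated with a deliberately simplified Python sketch (not the paper's method): the pieces here are constants, a degenerate MTE with zero exponents, and each coarse piece takes the mass-preserving average of the fine pieces it covers.

    ```python
    import numpy as np

    edges_fine = np.linspace(0.0, 6.0, 13)        # 12 equal-width fine pieces
    values_fine = np.exp(-edges_fine[:-1])        # some decaying potential

    def coarsen(edges, values, factor):
        """Merge `factor` adjacent equal-width pieces, averaging values so
        that the integral of the potential is unchanged."""
        edges_c = edges[::factor]
        values_c = values.reshape(-1, factor).mean(axis=1)
        return edges_c, values_c

    edges_c, values_c = coarsen(edges_fine, values_fine, 3)   # 4 coarse pieces

    # The total mass is preserved exactly by construction.
    mass_fine = np.sum(values_fine * np.diff(edges_fine))
    mass_c = np.sum(values_c * np.diff(edges_c))
    ```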

    Learning Bayesian Networks for Regression from Incomplete Databases

    In this paper we address the problem of inducing Bayesian network models for regression from incomplete databases. We use mixtures of truncated exponentials (MTEs) to represent the joint distribution in the induced networks. We consider two particular Bayesian network structures, the so-called naïve Bayes and TAN, which have been successfully used as regression models when learning from complete data. We propose an iterative procedure for inducing the models, based on a variation of the data augmentation method in which the missing values of the explanatory variables are filled by simulating from their posterior distributions, while the missing values of the response variable are generated using the conditional expectation of the response given the explanatory variables. We also consider the refinement of the regression models by using variable selection and bias reduction. We illustrate through a set of experiments with various databases the performance of the proposed algorithms.
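    The iterative imputation scheme can be sketched on a toy linear model rather than an MTE network (all names here are illustrative, and, for simplicity, missing explanatory values are drawn from the empirical marginal instead of the posterior the paper uses): simulate the missing explanatory values, plug in the conditional expectation for the missing responses, re-estimate the model, and repeat.

    ```python
    import random

    rng = random.Random(1)
    n = 400
    x_full = [rng.gauss(0, 1) for _ in range(n)]
    y_full = [2.0 * xi + rng.gauss(0, 0.1) for xi in x_full]

    # Knock out roughly 20% of the x values and 20% of the y values.
    x = [xi if rng.random() > 0.2 else None for xi in x_full]
    y = [yi if rng.random() > 0.2 else None for yi in y_full]

    def fit(pairs):
        # Least-squares slope through the origin
        num = sum(a * b for a, b in pairs)
        den = sum(a * a for a, _ in pairs)
        return num / den

    slope = 1.0  # initial model
    for _ in range(10):
        xs_obs = [xi for xi in x if xi is not None]
        # Simulate missing explanatory values (empirical marginal here),
        # and fill missing responses with their conditional expectation.
        x_imp = [xi if xi is not None else rng.choice(xs_obs) for xi in x]
        y_imp = [yi if yi is not None else slope * xi
                 for yi, xi in zip(y, x_imp)]
        slope = fit(list(zip(x_imp, y_imp)))
    ```

    Because the marginal draws ignore the observed response, the recovered slope is attenuated relative to the true value of 2; simulating from the posterior, as the paper does, avoids this bias.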

    Learning hybrid Bayesian networks using mixtures of truncated exponentials

    In this paper we introduce an algorithm for learning hybrid Bayesian networks from data. The result of the algorithm is a network where the conditional distribution for each variable is a mixture of truncated exponentials (MTE), so that no restrictions on the network topology are imposed. The structure of the network is obtained by searching over the space of candidate networks using optimisation methods. The conditional densities are estimated by means of Gaussian kernel densities that afterwards are approximated by MTEs, so that the resulting network is appropriate for using standard algorithms for probabilistic reasoning. The behaviour of the proposed algorithm is tested using a set of real-world and artificially generated databases.