
    Joint Probability Trees

    We introduce Joint Probability Trees (JPT), a novel approach that makes learning of, and reasoning about, joint probability distributions tractable for practical applications. JPTs support both symbolic and subsymbolic variables in a single hybrid model, and they do not rely on prior knowledge about variable dependencies or families of distributions. JPT representations build on tree structures that partition the problem space into relevant subregions elicited from the training data, instead of postulating a rigid dependency model prior to learning. Learning and reasoning scale linearly in JPTs, and the tree structure allows white-box reasoning about any posterior probability P(Q|E), so that interpretable explanations can be provided for any inference result. Our experiments showcase the practical applicability of JPTs in high-dimensional heterogeneous probability spaces with millions of training samples, making them a promising alternative to classic probabilistic graphical models.
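
    To make the idea concrete, here is a minimal Python sketch (not the authors' implementation; the class names and the assumption that variables are independent within a leaf are illustrative): leaves stand for the learned subregions of the problem space, and any posterior P(Q|E) is answered in one linear pass over them.

```python
class Leaf:
    def __init__(self, prior, dists):
        self.prior = prior           # P(leaf): probability mass of this subregion
        self.dists = dists           # {variable: {value: P(value | leaf)}}

    def prob(self, assignment):
        # P(assignment | leaf), treating variables as independent within a leaf
        p = 1.0
        for var, val in assignment.items():
            p *= self.dists[var].get(val, 0.0)
        return p

class JPTSketch:
    def __init__(self, leaves):
        self.leaves = leaves

    def query(self, q, e):
        # P(Q | E) = P(Q, E) / P(E): one linear pass over the leaves
        joint = sum(l.prior * l.prob({**q, **e}) for l in self.leaves)
        evid = sum(l.prior * l.prob(e) for l in self.leaves)
        return joint / evid if evid > 0 else 0.0

# Two leaves standing in for subregions elicited from (imaginary) training data
tree = JPTSketch([
    Leaf(0.6, {"weather": {"sun": 0.8, "rain": 0.2},
               "mood": {"good": 0.9, "bad": 0.1}}),
    Leaf(0.4, {"weather": {"sun": 0.1, "rain": 0.9},
               "mood": {"good": 0.3, "bad": 0.7}}),
])
print(tree.query({"mood": "good"}, {"weather": "rain"}))   # P(mood=good | rain)
```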

    A Note on Probability Trees

    Not many introductory probability and statistics textbooks emphasize the use of probability trees for complex probability calculations. This is puzzling in view of the power that trees bring to organizing such calculations for students. An effective classroom technique is discussed in this note.
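
    As a concrete instance of the kind of calculation such trees organize, here is a short worked example in Python with hypothetical diagnostic-test numbers: multiply the probabilities along each root-to-leaf path, then add the paths that make up the event of interest.

```python
p_disease = 0.01                 # first branch: disease vs. no disease
p_pos_given_disease = 0.95       # second-stage branches: test result
p_pos_given_healthy = 0.05

# Multiply along each root-to-leaf path, then add the paths where the test is positive.
p_pos = p_disease * p_pos_given_disease + (1 - p_disease) * p_pos_given_healthy
p_disease_given_pos = (p_disease * p_pos_given_disease) / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")   # ~0.161
```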

    Learning recursive probability trees from probabilistic potentials

    A recursive probability tree (RPT) is a recently introduced data structure for representing the distributions in a probabilistic graphical model. RPTs capture most of the types of independencies found in a probability distribution. The explicit representation of these features using RPTs simplifies computations during inference. This paper describes a learning algorithm that builds an RPT from a probability distribution. Experiments show that this algorithm generates a good approximation of the original distribution, thus making available all the advantages provided by RPTs.
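
    A hedged sketch of what representing independencies explicitly can look like: besides ordinary split nodes, an RPT can contain factorization nodes whose child potentials multiply. The node classes below are illustrative Python, not the paper's learning algorithm or implementation.

```python
class Value:                         # leaf node holding a number
    def __init__(self, v): self.v = v
    def eval(self, cfg): return self.v

class Split:                         # branch on one variable's value
    def __init__(self, var, children): self.var, self.children = var, children
    def eval(self, cfg): return self.children[cfg[self.var]].eval(cfg)

class Product:                       # factorization node: children multiply
    def __init__(self, factors): self.factors = factors
    def eval(self, cfg):
        out = 1.0
        for f in self.factors:
            out *= f.eval(cfg)
        return out

# phi(a, b) = f(a) * g(b): the Product node makes the factorization explicit
phi = Product([
    Split("a", {0: Value(0.3), 1: Value(0.7)}),
    Split("b", {0: Value(0.6), 1: Value(0.4)}),
])
print(phi.eval({"a": 1, "b": 0}))    # 0.7 * 0.6 = 0.42
```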

    New strategies for finding multiplicative decompositions of probability trees

    Probability trees are a powerful data structure for representing probabilistic potentials. However, their complexity can become intractable if they represent a probability distribution over a large set of variables. In this paper, we study the problem of decomposing a probability tree as a product of smaller trees, with the aim of handling larger probabilistic potentials. We propose exact and approximate approaches and evaluate their behaviour through an extensive set of experiments.
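
    The toy Python below illustrates the goal, not the paper's exact or approximate algorithms: a small normalized potential phi(a, b) that happens to factorize is decomposed into a product f(a) * g(b) of two smaller factors, with flat dictionaries standing in for the smaller trees.

```python
import itertools

# A small normalized potential phi(a, b) that happens to factorize exactly
phi = {(0, 0): 0.18, (0, 1): 0.12, (1, 0): 0.42, (1, 1): 0.28}

# Because phi is normalized and rank-1, its marginals give an exact
# multiplicative decomposition phi(a, b) = f(a) * g(b)
f = {a: phi[a, 0] + phi[a, 1] for a in (0, 1)}   # f(a)
g = {b: phi[0, b] + phi[1, b] for b in (0, 1)}   # g(b)

for a, b in itertools.product((0, 1), repeat=2):
    assert abs(phi[a, b] - f[a] * g[b]) < 1e-9   # product of smaller "trees"
print(f, g)
```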

    Dynamic Importance Sampling in Bayesian Networks Based on Probability Trees

    In this paper we introduce a new dynamic importance sampling propagation algorithm for Bayesian networks. Importance sampling is based on using an auxiliary sampling distribution from which a set of configurations of the variables in the network is drawn, and the performance of the algorithm depends on the variance of the weights associated with the simulated configurations. The basic idea of dynamic importance sampling is to use the simulation of a configuration to modify the sampling distribution in order to improve its quality, thereby reducing the variance of the future weights. The paper shows that this can be achieved with low computational effort. The experiments carried out show that the final results can be very good even when the initial sampling distribution is far from the optimum.
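
    A toy, hypothetical rendering of the dynamic idea on a single Bernoulli variable rather than a full Bayesian network: after each simulated configuration, the proposal is nudged toward the current weighted estimate of the target, so the weights of later samples have lower variance.

```python
import random

random.seed(0)
p1 = 0.9                 # target distribution: P(X = 1)
q1 = 0.5                 # initial sampling distribution, far from the target
lr = 0.05                # adaptation rate
est, wsum = 0.0, 0.0

for _ in range(5000):
    x = 1 if random.random() < q1 else 0
    w = p1 / q1 if x == 1 else (1 - p1) / (1 - q1)   # importance weight
    est += w * x
    wsum += w
    # dynamic step: pull the proposal toward the current weighted estimate
    q1 = (1 - lr) * q1 + lr * (est / wsum)
    q1 = min(max(q1, 0.01), 0.99)                    # keep the proposal proper

print(f"estimate of P(X = 1): {est / wsum:.3f}, final proposal: {q1:.3f}")
```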

    Bayesian Causal Induction

    Discovering causal relationships is a hard task, often hindered by the need for intervention and often requiring large amounts of data to resolve statistical uncertainty. However, humans quickly arrive at useful causal relationships. One possible reason is that humans extrapolate from past experience to new, unseen situations: that is, they encode beliefs over causal invariances, allowing for sound generalization from the observations they obtain from directly acting in the world. Here we outline a Bayesian model of causal induction where beliefs over competing causal hypotheses are modeled using probability trees. Based on this model, we illustrate why, in the general case, we need interventions plus constraints on our causal hypotheses in order to extract causal information from our experience. (4 pages, 4 figures; 2011 NIPS Workshop on Philosophy and Machine Learning.)
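
    The toy Python below (hypothetical numbers, a deliberately simplified two-hypothesis setting) illustrates the abstract's point: two causal hypotheses that are observationally equivalent stay at their prior under passive observation, while a single intervention separates them.

```python
# Hypothesis A: X -> Y;  hypothesis B: Y -> X.  Both induce the same joint,
# so they are observationally equivalent:
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}   # P(X, Y)

def lik_observe(h, x, y):
    return joint[x, y]            # identical for A and B

def lik_do_x(h, x, y):
    # Under do(X = x), hypothesis A keeps P(Y | X = x); hypothesis B severs
    # the arrow into X, so Y simply follows its marginal P(Y = y).
    if h == "A":
        return joint[x, y] / (joint[x, 0] + joint[x, 1])
    return joint[0, y] + joint[1, y]

def posterior(prior, lik, x, y):
    post = {h: prior[h] * lik(h, x, y) for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

prior = {"A": 0.5, "B": 0.5}
print(posterior(prior, lik_observe, 1, 1))   # unchanged: {'A': 0.5, 'B': 0.5}
print(posterior(prior, lik_do_x, 1, 1))      # intervention favours A (~0.62)
```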