
    Symbolic Exact Inference for Discrete Probabilistic Programs

    The computational burden of probabilistic inference remains a hurdle for applying probabilistic programming languages to practical problems of interest. In this work, we provide a semantic and algorithmic foundation for efficient exact inference on discrete-valued finite-domain imperative probabilistic programs. We leverage and generalize efficient inference procedures for Bayesian networks, which exploit the structure of the network to decompose the inference task, thereby avoiding full path enumeration. To do this, we first compile probabilistic programs to a symbolic representation. Then we adapt techniques from the probabilistic logic programming and artificial intelligence communities in order to perform inference on the symbolic representation. We formalize our approach, prove it sound, and experimentally validate it against existing exact and approximate inference techniques. We show that our inference approach is competitive with inference procedures specialized for Bayesian networks, thereby expanding the class of probabilistic programs that can be practically analyzed.
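
    The following is a minimal illustrative sketch, not the paper's compilation pipeline: it shows how summing variables out of small factors, in the spirit of Bayesian-network inference, answers a query over a tiny discrete program without enumerating every execution path. The program, variable names, and probabilities are assumptions made for the example.

    # Hypothetical toy program:  rain ~ flip(0.2); sprinkler ~ flip(0.5);
    # wet := rain or sprinkler;  query P(rain | wet = true).
    from itertools import product

    def make_factor(vars_, fn):
        # a factor is (variable names, table mapping assignments to weights)
        return (vars_, {vals: fn(dict(zip(vars_, vals)))
                        for vals in product([False, True], repeat=len(vars_))})

    def multiply(f, g):
        (fv, ft), (gv, gt) = f, g
        vars_ = list(dict.fromkeys(fv + gv))
        return make_factor(vars_, lambda a: ft[tuple(a[v] for v in fv)] *
                                            gt[tuple(a[v] for v in gv)])

    def sum_out(f, var):
        fv, ft = f
        keep = [v for v in fv if v != var]
        table = {}
        for vals, w in ft.items():
            a = dict(zip(fv, vals))
            key = tuple(a[v] for v in keep)
            table[key] = table.get(key, 0.0) + w
        return (keep, table)

    f_rain = make_factor(["rain"], lambda a: 0.2 if a["rain"] else 0.8)
    f_spr  = make_factor(["sprinkler"], lambda a: 0.5)
    f_wet  = make_factor(["rain", "sprinkler", "wet"],
                         lambda a: 1.0 if a["wet"] == (a["rain"] or a["sprinkler"]) else 0.0)
    f_obs  = make_factor(["wet"], lambda a: 1.0 if a["wet"] else 0.0)

    # Eliminate sprinkler locally instead of enumerating all execution paths.
    g = sum_out(multiply(f_spr, f_wet), "sprinkler")
    post = sum_out(multiply(multiply(g, f_rain), f_obs), "wet")
    keep, table = post
    z = sum(table.values())
    print({vals[0]: w / z for vals, w in table.items()})  # ~ {False: 0.667, True: 0.333}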

    Lifted Variable Elimination for Probabilistic Logic Programming

    Lifted inference has been proposed for various probabilistic logical frameworks in order to compute the probability of queries in a time that depends on the size of the domains of the random variables rather than the number of instances. Even though various authors have underlined its importance for probabilistic logic programming (PLP), lifted inference has been applied up to now only to relational languages outside of logic programming. In this paper we adapt Generalized Counting First Order Variable Elimination (GC-FOVE) to the problem of computing the probability of queries to probabilistic logic programs under the distribution semantics. In particular, we extend the Prolog Factor Language (PFL) to include two new types of factors that are needed for representing ProbLog programs. These factors take into account the existing causal independence relationships among random variables and are managed by the extension to variable elimination proposed by Zhang and Poole for dealing with convergent variables and heterogeneous factors. Two new operators are added to GC-FOVE for treating heterogeneous factors. The resulting algorithm, called LP^2 for Lifted Probabilistic Logic Programming, has been implemented by modifying the PFL implementation of GC-FOVE and tested on three benchmarks for lifted inference. A comparison with PITA and ProbLog2 shows the potential of the approach. Comment: To appear in Theory and Practice of Logic Programming (TPLP). arXiv admin note: text overlap with arXiv:1402.0565 by other authors.
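
    As a hedged illustration of why lifting matters (this is not GC-FOVE or LP^2), the sketch below compares grounded enumeration over n interchangeable probabilistic facts with a counting argument whose cost does not depend on n; the query, probability, and domain size are assumptions.

    from itertools import product

    p, n = 0.3, 12   # assumed probability of each ground fact and domain size

    # Grounded inference: enumerate all 2^n truth assignments to the instances.
    grounded = sum(p ** sum(w) * (1 - p) ** (n - sum(w))
                   for w in product([0, 1], repeat=n)
                   if any(w))            # query: at least one instance is true

    # Lifted inference: exploit interchangeability; cost independent of n.
    lifted = 1 - (1 - p) ** n

    print(grounded, lifted)              # both ~0.9862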

    Making inferences in incomplete Bayesian networks: A Dempster-Shafer belief function approach

    How do you make inferences from a Bayesian network (BN) model with missing information? For example, we may not have priors for some variables or may not have conditionals for some states of the parent variables. It is well known that the Dempster-Shafer (D-S) belief function theory is a generalization of probability theory. So, a solution is to embed an incomplete BN model in a D-S belief function model, omit the missing data, and then make inferences from the belief function model. We will demonstrate this using an implementation of a local computation algorithm for D-S belief function models called the “Belief function machine.” One advantage of this approach is that we get interval estimates of the probabilities of interest. Using Laplacian (equally likely) or maximum entropy priors or conditionals for missing data in a BN may lead to point estimates for the probabilities of interest, masking the uncertainty in these estimates. Bayesian reasoning cannot proceed from an incomplete model, and a Bayesian sensitivity analysis of the missing parameters is not a substitute for a belief-function analysis.
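
    A minimal sketch of the idea, not the Belief Function Machine itself: the missing prior on a binary variable X is replaced by a vacuous belief function, the known conditionals P(Y | X) are turned into belief functions by Smets' conditional embedding, everything is combined by Dempster's rule, and the marginal for Y is read off as an interval [Bel, Pl] rather than a point estimate. The variables and numbers are assumptions.

    from itertools import product

    frame = frozenset(product(["x0", "x1"], ["y", "not_y"]))   # frame for (X, Y)

    def dempster(m1, m2):
        # Dempster's rule for mass functions given as dicts: frozenset -> mass
        raw, conflict = {}, 0.0
        for (A, u), (B, v) in product(m1.items(), m2.items()):
            C = A & B
            if C:
                raw[C] = raw.get(C, 0.0) + u * v
            else:
                conflict += u * v
        return {A: w / (1.0 - conflict) for A, w in raw.items()}

    def embed(x, p_y_given_x):
        # Smets' conditional embedding of P(Y = y | X = x) onto the product frame
        others = frozenset(e for e in frame if e[0] != x)
        return {others | frozenset({(x, "y")}): p_y_given_x,
                others | frozenset({(x, "not_y")}): 1.0 - p_y_given_x}

    vacuous_prior = {frame: 1.0}                     # the prior for X is missing
    m = dempster(dempster(embed("x0", 0.9), embed("x1", 0.2)), vacuous_prior)

    bel = sum(w for A, w in m.items() if {e[1] for e in A} == {"y"})
    pl  = sum(w for A, w in m.items() if "y" in {e[1] for e in A})
    print(f"P(Y = y) lies in [{bel:.3f}, {pl:.3f}]")  # [0.180, 0.920]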

    Combining link and content-based information in a Bayesian inference model for entity search

    An architectural model of a Bayesian inference network to support entity search in semantic knowledge bases is presented. The model supports the explicit combination of primitive data type and object-level semantics under a single computational framework. A flexible query model is also supported, capable of reasoning with whatever simple semantics are available in queries.

    Conditionals and modularity in general logics

    In this work in progress, we discuss independence and interpolation and related topics for classical, modal, and non-monotonic logics.

    Compositional Models in Valuation-Based Systems

    Compositional models were initially described for discrete probability theory, and later extended to possibility theory and to belief functions in the Dempster–Shafer (D–S) theory of evidence. The valuation-based system (VBS) is a unifying theoretical framework generalizing some of the well-known and frequently used uncertainty calculi. This generalization enables us not only to highlight the most important theoretical properties necessary for efficient inference (analogous to Bayesian inference in the framework of Bayesian networks), but also to design efficient computational procedures. Some of the specific calculi covered by VBS are probability theory, a version of possibility theory where combination is the product t-norm, Spohn’s epistemic belief theory, and D–S belief function theory. In this paper, we describe compositional models in the general framework of VBS using the semantics of no-double counting, which is central to the VBS framework. Also, we show that conditioning can be expressed using the composition operator. We define a special case of compositional models called decomposable models, again in the VBS framework, and demonstrate that for the class of decomposable compositional models, conditioning can be done using local computation. As all results are obtained for the VBS framework, they hold in all calculi that fit in the VBS framework. For the D–S theory of belief functions, the compositional model defined here differs from the one studied by Jiroušek, Vejnarová, and Daniel. The latter model can also be described in the VBS framework, but with a combination operator that is different from Dempster’s rule of combination. For the version of possibility theory in which combination is the product t-norm, the compositional model defined here reduces to the one studied by Vejnarová.
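
    As a sketch of how the composition operator looks in the probabilistic calculus covered by VBS (the notation and the choice of marginals are ours, following Jiroušek's operator for discrete distributions, and are not taken from the paper): for a distribution f over variables (X, Y) and a distribution g over (Y, Z),

    \[
      (f \triangleright g)(x, y, z) \;=\; \frac{f(x, y)\, g(y, z)}{g^{\downarrow Y}(y)},
      \qquad g^{\downarrow Y}(y) = \sum_{z} g(y, z) > 0 .
    \]

    The division by the marginal g^{\downarrow Y} on the shared variables Y is where the no-double-counting semantics shows up; when f^{\downarrow Y} is not dominated by g^{\downarrow Y}, the composition is left undefined.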

    The Joys of Graph Transformation

    We believe that the technique of graph transformation offers a very natural way to specify semantics for languages that have dynamic allocation and linking structure; for instance, object-oriented programming languages, but also languages for mobility. In this note we explain, at a rather informal level, the reasons for this belief. Our hope in doing this is to raise interest in this technique and so generate more interest in the fascinating possibilities and open questions of this area.

    Model Checking Finite-Horizon Markov Chains with Probabilistic Inference

    We revisit the symbolic verification of Markov chains with respect to finite-horizon reachability properties. The prevalent approach iteratively computes step-bounded state reachability probabilities. By contrast, recent advances in probabilistic inference suggest symbolically representing all horizon-length paths through the Markov chain. We ask whether this perspective advances the state of the art in probabilistic model checking. First, we formally describe both approaches in order to highlight their key differences. Then, using these insights, we develop Rubicon, a tool that transpiles Prism models to the probabilistic inference tool Dice. Finally, we demonstrate better scalability compared to probabilistic model checkers on selected benchmarks. Altogether, our results suggest that probabilistic inference is a valuable addition to the probabilistic model checking portfolio -- with Rubicon as a first step towards integrating both perspectives. Comment: Technical Report. Accepted at CAV 2021.
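
    For concreteness, here is a minimal sketch of the prevalent iterative approach that the abstract contrasts with Rubicon's path-based symbolic encoding: step-bounded reachability probabilities computed by repeated matrix-vector products. The chain, target state, and horizon are assumptions, and this is not Rubicon's algorithm.

    import numpy as np

    # Transition matrix of an assumed 3-state Markov chain; state 2 is the
    # absorbing target state.
    P = np.array([[0.5, 0.4, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.0, 0.0, 1.0]])
    horizon = 10

    # x[s] = probability of having reached the target within k steps from s
    x = np.array([0.0, 0.0, 1.0])
    for _ in range(horizon):
        x = P @ x                 # one step of the bounded reachability fixpoint
    print(x[0])                   # P(reach target within 10 steps | start in state 0)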

    On conditional belief functions in directed graphical models in the Dempster-Shafer theory

    The primary goal is to define conditional belief functions in the Dempster-Shafer theory. We do so by analogy with the notion of conditional probability tables in probability theory. Conditional belief functions are necessary for constructing directed graphical belief function models in the same sense that conditional probability tables are necessary for constructing Bayesian networks. We provide examples of conditional belief functions, including those obtained by Smets' conditional embedding. Besides defining conditional belief functions, we state and prove a few basic properties of conditionals. In the belief-function literature, conditionals are instead defined starting from a joint belief function, using the removal operator, an inverse of Dempster's combination operator. When such conditionals are well-defined belief functions, we show that our definition is equivalent to these definitions.
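
    As a hedged sketch of the removal operator mentioned above (our notation, stated for a finite frame, and not taken verbatim from the paper): with the commonality function Q_m(A) = \sum_{B \supseteq A} m(B), Dempster's rule acts, up to normalization, by pointwise multiplication of commonalities, and removal inverts it by pointwise division,

    \[
      Q_{m_1 \oplus m_2}(A) \;\propto\; Q_{m_1}(A)\, Q_{m_2}(A),
      \qquad
      Q_{m \ominus m_2}(A) \;\propto\; \frac{Q_{m}(A)}{Q_{m_2}(A)}, \quad Q_{m_2}(A) > 0 .
    \]

    When the commonality function obtained by division corresponds to a well-defined belief function, the resulting conditional is the one the abstract says its definition agrees with.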