14 research outputs found

    Products of Weighted Logic Programs

    www.lti.cs.cmu.edu © 2008 Shay B. Cohen, Robert J. Simmons, and Noah A. Smith. Weighted logic programming, a generalization of bottom-up logic programming, is a successful framework for specifying dynamic programming algorithms. In this setting, proofs correspond to the algorithm’s output space, such as a path through a graph or a grammatical derivation, and are given a weighted score, often interpreted as a probability, that depends on the scores of the base axioms used in the proof. The desired output is a function over all possible proofs, such as a sum of scores or an optimal score. We describe the PRODUCT transformation, which can merge two weighted logic programs into a new one. The resulting program optimizes a product of proof scores from the original programs, constituting a scoring function known in machine learning as a “product of experts.” Through the addition of intuitive constraining side conditions, we show that several important dynamic programming algorithms can be derived by applying PRODUCT to simpler weighted logic programs. This report is an extended version of [3].
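
    The transformation is easiest to see on ground programs. The following is a minimal sketch, not the paper's formal construction: each rule is a (head, body, weight) triple, an axiom is a rule with an empty body, PRODUCT pairs rules whose bodies have equal length so that items of the product program are pairs of original items, and proof scores are evaluated in the (max, *) semiring. All names and the two-rule example programs are illustrative.

        from itertools import product as cartesian

        def product_program(rules_a, rules_b):
            # PRODUCT on ground rules (sketch): pair every rule of A with
            # every rule of B whose body has the same length; items of the
            # product program are pairs, and rule weights multiply.
            out = []
            for (ha, ba, wa), (hb, bb, wb) in cartesian(rules_a, rules_b):
                if len(ba) == len(bb):
                    out.append(((ha, hb), tuple(zip(ba, bb)), wa * wb))
            return out

        def best_proof_score(rules, goal):
            # Naive bottom-up evaluation in the (max, *) semiring.
            score = {}
            changed = True
            while changed:
                changed = False
                for head, body, w in rules:
                    if all(b in score for b in body):
                        s = w
                        for b in body:
                            s *= score[b]
                        if s > score.get(head, 0.0):
                            score[head] = s
                            changed = True
            return score.get(goal, 0.0)

        # Two one-rule "experts" proving the same goal; PRODUCT scores a
        # proof by the product of the two programs' proof scores.
        a = [("x", (), 0.5), ("goal", ("x",), 0.8)]
        b = [("y", (), 0.25), ("goal", ("y",), 1.0)]
        print(best_proof_score(product_program(a, b), ("goal", "goal")))  # 0.1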

    Probabilistic Programming Concepts

    A multitude of different probabilistic programming languages exists today, all extending a traditional programming language with primitives to support modeling of complex, structured probability distributions. Each of these languages employs its own probabilistic primitives and comes with a particular syntax, semantics, and inference procedure. This makes it hard to understand the underlying programming concepts and to appreciate the differences between the languages. To obtain a better understanding of probabilistic programming, we identify a number of core programming concepts underlying the primitives used by various probabilistic languages, discuss the execution mechanisms that they require, and use these to position state-of-the-art probabilistic languages and their implementations. While doing so, we focus on probabilistic extensions of logic programming languages such as Prolog, which have been developed for more than 20 years.
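
    As a concrete illustration of the shared core, here is a toy sketch of the distribution semantics that probabilistic logic languages such as ProbLog build on: probabilistic facts induce a distribution over possible worlds, and a query's probability is the total mass of the worlds in which it is provable. The program and the hard-coded inference below are made up for brevity.

        from itertools import product

        # Probabilistic facts with their probabilities (made-up program).
        prob_facts = {"edge(a,b)": 0.6, "edge(b,c)": 0.7, "edge(a,c)": 0.1}

        def provable(query, facts):
            # Toy stand-in for logical inference: path(a,c) holds via the
            # direct edge or via the two-step route through b.
            if query == "path(a,c)":
                return "edge(a,c)" in facts or (
                    "edge(a,b)" in facts and "edge(b,c)" in facts)
            return query in facts

        def query_probability(query):
            # Enumerate all 2^n possible worlds (truth assignments to the
            # probabilistic facts) and sum the mass of those proving the query.
            names = list(prob_facts)
            total = 0.0
            for choices in product([True, False], repeat=len(names)):
                world = {n for n, keep in zip(names, choices) if keep}
                p = 1.0
                for n, keep in zip(names, choices):
                    p *= prob_facts[n] if keep else 1.0 - prob_facts[n]
                if provable(query, world):
                    total += p
            return total

        print(query_probability("path(a,c)"))  # 0.42 + 0.1 - 0.042 = 0.478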

    Approximate PCFG Parsing Using Tensor Decomposition

    We provide an approximation algorithm for PCFG parsing, which asymptotically improves time complexity with respect to the input grammar size, and prove upper bounds on the approximation quality. We test our algorithm on two treebanks, and get significant improvements in parsing speed.
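
    The abstract does not spell out the construction, but the standard shape of such a speed-up is sketched below under assumed notation: if the binary-rule weight tensor T[a, b, c] (weight of rule a -> b c, with N nonterminals) is approximated by a rank-R decomposition, the inside-algorithm combination of two child inside-score vectors can be routed through the factors instead of the full tensor, dropping the per-combination cost from O(N^3) to O(R * N).

        import numpy as np

        N, R = 50, 8                       # nonterminals, assumed rank
        rng = np.random.default_rng(0)
        U, V, W = (rng.random((N, R)) for _ in range(3))

        def combine_exact(T, beta_left, beta_right):
            # inside[a] = sum_{b,c} T[a,b,c] * beta_left[b] * beta_right[c]
            return np.einsum("abc,b,c->a", T, beta_left, beta_right)

        def combine_lowrank(beta_left, beta_right):
            # Same contraction through the factors: O(R*N) instead of O(N^3).
            return U @ ((V.T @ beta_left) * (W.T @ beta_right))

        T = np.einsum("ar,br,cr->abc", U, V, W)  # the tensor the factors encode
        bl, br = rng.random(N), rng.random(N)
        assert np.allclose(combine_exact(T, bl, br), combine_lowrank(bl, br))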

    Coalgebraic Semantics for Probabilistic Logic Programming

    Probabilistic logic programming is increasingly important in artificial intelligence and related fields as a formalism to reason about uncertainty. It generalises logic programming with the possibility of annotating clauses with probabilities. This paper proposes a coalgebraic semantics for probabilistic logic programming. Programs are modelled as coalgebras for a certain functor F, and two semantics are given in terms of cofree coalgebras. First, the cofree F-coalgebra yields a semantics in terms of derivation trees. Second, by embedding F into another type G, the cofree G-coalgebra yields a 'possible worlds' interpretation of programs, from which one may recover the usual distribution semantics of probabilistic logic programming. Furthermore, we show that a similar approach can be used to provide a coalgebraic semantics for weighted logic programming.
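
    A very informal reading of the first semantics, with made-up types rather than the paper's functor F: a program is a map sending each atom to its probability-annotated clauses, and repeatedly applying this map unfolds the derivation tree of that atom. The sketch below approximates the (coinductive) cofree unfolding to finite depth.

        # Made-up representation: atom -> list of (probability, body atoms).
        program = {
            "s": [(0.4, ["p", "q"]), (0.6, [])],
            "p": [(1.0, [])],
            "q": [(0.5, ["p"])],
        }

        def unfold(atom, depth):
            # Finite-depth approximation of the cofree unfolding: each atom
            # is replaced by the tree of clauses that could derive it.
            if depth == 0:
                return atom
            return (atom, [(w, [unfold(b, depth - 1) for b in body])
                           for (w, body) in program.get(atom, [])])

        print(unfold("s", 2))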

    Probabilistic (logic) programming concepts

    A multitude of different probabilistic programming languages exists today, all extending a traditional programming language with primitives to support modeling of complex, structured probability distributions. Each of these languages employs its own probabilistic primitives and comes with a particular syntax, semantics, and inference procedure. This makes it hard to understand the underlying programming concepts and to appreciate the differences between the languages. To obtain a better understanding of probabilistic programming, we identify a number of core programming concepts underlying the primitives used by various probabilistic languages, discuss the execution mechanisms that they require, and use these to position and survey state-of-the-art probabilistic languages and their implementations. While doing so, we focus on probabilistic extensions of logic programming languages such as Prolog, which have been considered for over 20 years.

    Exact Decoding for Phrase-Based Statistical Machine Translation

    The combinatorial space of translation derivations in phrase-based statistical machine translation is given by the intersection between a translation lattice and a target language model. We replace this intractable intersection by a tractable relaxation which incorporates a low-order upper bound on the language model. Exact optimisation is achieved through a coarse-to-fine strategy with connections to adaptive rejection sampling. We perform exact optimisation with unpruned language models of order 3 to 5 and show search-error curves for beam search and cube pruning on standard test sets. This is the first work to tractably tackle exact optimisation with language models of order higher than 3.
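
    A rough sketch of what a "low-order upper bound on the language model" can look like; the construction below is one illustrative possibility, not necessarily the paper's, and the trigram table is made up. For each word, take the maximum log-probability the full model assigns it over all histories; scoring hypotheses with these optimistic per-word bounds can only overestimate the true n-gram score, which is what an admissible coarse pass needs.

        from collections import defaultdict
        from math import log

        # Made-up trigram model: (history pair) -> {word: log-probability}.
        trigram_logprob = {
            ("the", "cat"): {"sat": log(0.5), "ran": log(0.2)},
            ("a", "dog"):   {"sat": log(0.1), "ran": log(0.6)},
        }

        # Low-order bound: best log-probability of each word over all histories.
        upper = defaultdict(lambda: float("-inf"))
        for history, dist in trigram_logprob.items():
            for word, lp in dist.items():
                upper[word] = max(upper[word], lp)

        def optimistic_lm_score(words):
            # Never below the true trigram score of any ordering/context.
            return sum(upper[w] for w in words)

        print(optimistic_lm_score(["sat", "ran"]))  # log 0.5 + log 0.6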