5 research outputs found

    TP-Compilation for inference in probabilistic logic programs

    We propose TP-compilation, a new inference technique for probabilistic logic programs that is based on forward reasoning. TP-compilation proceeds incrementally in that it interleaves the knowledge compilation step for weighted model counting with forward reasoning on the logic program. This leads to a novel anytime algorithm that provides hard bounds on the inferred probabilities. The main difference with existing inference techniques for probabilistic logic programs is that the latter perform a sequence of isolated transformations. Typically, these transformations include conversion of the ground program into an equivalent propositional formula and compilation of this formula into a more tractable target representation for weighted model counting. An empirical evaluation shows that TP-compilation effectively handles larger instances of complex or cyclic real-world problems than current sequential approaches, both for exact and anytime approximate inference. Furthermore, we show that TP-compilation is conducive to inference in dynamic domains as it supports efficient updates to the compiled model.
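    The interleaving of forward reasoning with model counting can be illustrated with a small, self-contained sketch. The toy program, the naive enumeration-based model counter, and all names below are hypothetical assumptions, not the paper's implementation; the actual method compiles the per-atom formulas into a tractable circuit rather than enumerating truth assignments. The loop structure (apply a TP-style operator, then recompute bounds after each iteration) is the anytime pattern the abstract describes.

```python
# Illustrative sketch only: forward reasoning interleaved with a naive
# weighted model counter, in the spirit of TP-compilation. The program,
# names, and query are hypothetical; a real implementation would compile
# the formulas into a circuit instead of enumerating assignments.
from itertools import product

# Probabilistic facts: name -> probability (assumed mutually independent).
facts = {"edge_ab": 0.6, "edge_bc": 0.7, "edge_ac": 0.2}

# Ground rules: head -> list of bodies; each body is a list of atoms
# (probabilistic facts or derived atoms).
rules = {
    "path_ab": [["edge_ab"]],
    "path_bc": [["edge_bc"]],
    "path_ac": [["edge_ac"], ["path_ab", "path_bc"]],
}

def wmc(dnf):
    """Naive weighted model count of a DNF (set of explanations) over the facts."""
    names = list(facts)
    total = 0.0
    for bits in product([False, True], repeat=len(names)):
        world = dict(zip(names, bits))
        if any(all(world[f] for f in clause) for clause in dnf):
            weight = 1.0
            for f, b in world.items():
                weight *= facts[f] if b else 1.0 - facts[f]
            total += weight
    return total

# lam maps each atom to a set of explanations (frozensets of facts).
lam = {f: {frozenset([f])} for f in facts}

# Forward reasoning: apply the TP-style operator until a fixpoint is reached,
# recomputing a sound lower bound after every iteration (the anytime aspect).
changed = True
while changed:
    changed = False
    for head, bodies in rules.items():
        new = set(lam.get(head, set()))
        for body in bodies:
            if all(b in lam for b in body):
                for combo in product(*(lam[b] for b in body)):
                    new.add(frozenset().union(*combo))
        if new != lam.get(head, set()):
            lam[head] = new
            changed = True
    print("lower bound P(path_ac) =", wmc(lam.get("path_ac", set())))
```

    Each pass through the loop only adds explanations, so the printed bound increases monotonically toward the exact probability; stopping early still yields a valid lower bound, which is the anytime guarantee sketched above.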

    Value‐based potentials: Exploiting quantitative information regularity patterns in probabilistic graphical models

    This study was jointly supported by the Spanish Ministry of Education and Science under projects PID2019-106758GB-C31 and TIN2016-77902-C3-2-P, and the European Regional Development Fund (FEDER). Funding for open access charge from Universidad de Granada/CBUA.

    When dealing with complex models (i.e., models with many variables, a high degree of dependency between variables, or many states per variable), the efficient representation of quantitative information in probabilistic graphical models (PGMs) is a challenging task. To address this problem, this study introduces several new structures, aptly named value-based potentials (VBPs), which are based exclusively on the values. VBPs leverage repeated values to reduce memory requirements. In the present paper, they are compared with some common structures, such as standard tables or unidimensional arrays, and probability trees (PTs). Like VBPs, PTs are designed to reduce memory space, but this is achieved only if value repetitions correspond to context-specific independence patterns (i.e., repeated values are related to consecutive indices or configurations). VBPs are devised to overcome this limitation. The goal of this study is to analyze the properties of VBPs. We provide a theoretical analysis of VBPs and use them to encode the quantitative information of a set of well-known Bayesian networks, measuring the access time to their content and the computational time required to perform some inference tasks.
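    The core idea of exploiting repeated values can be sketched with a small, hypothetical structure that indexes a potential by its distinct values rather than by configurations. The class and example table below are illustrative assumptions only; the paper's actual VBP structures are more refined. The sketch simply shows how a table with few distinct values can be stored more compactly than a full array, at the cost of slower access.

```python
# Illustrative sketch only: a value-indexed potential that stores each
# distinct value once, together with the flat indices where it occurs.
# The class name and example table are hypothetical, not the paper's VBPs.
import numpy as np

class ValueIndexedPotential:
    def __init__(self, table):
        arr = np.asarray(table)
        self.shape = arr.shape
        # One entry per distinct value, holding the flat indices where it occurs.
        self.value_to_indices = {}
        for idx, v in enumerate(arr.ravel()):
            self.value_to_indices.setdefault(float(v), []).append(idx)

    def __getitem__(self, config):
        # Map a configuration (tuple of state indices) to its flat index,
        # then find the distinct value that covers that index.
        flat_idx = int(np.ravel_multi_index(config, self.shape))
        for value, indices in self.value_to_indices.items():
            if flat_idx in indices:
                return value
        raise KeyError(config)

# Usage: a 2x2x2 conditional probability table with repeated values
# stores only 2 distinct values instead of 8 table cells.
cpt = np.array([[[0.9, 0.1], [0.9, 0.1]], [[0.9, 0.1], [0.1, 0.9]]])
vbp = ValueIndexedPotential(cpt)
print(len(vbp.value_to_indices), "distinct values;", cpt.size, "table cells")
print(vbp[(1, 1, 0)])  # -> 0.1
```

    Looking up a configuration now costs a scan over the distinct values instead of a direct array access, which is the memory-versus-access-time trade-off the study measures on the benchmark Bayesian networks.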