188 research outputs found

    A History of Probabilistic Inductive Logic Programming

    Get PDF
    The field of Probabilistic Logic Programming (PLP) has seen significant advances in the last 20 years, with many proposals for languages that combine probability with logic programming. From the start, the problem of learning probabilistic logic programs has been the focus of much attention, and learning these programs now represents a whole subfield of Inductive Logic Programming (ILP). In Probabilistic ILP (PILP), two problems are considered: learning the parameters of a program given its structure (the rules), and learning both the structure and the parameters. Usually, structure learning systems use parameter learning as a subroutine. In this article, we present an overview of PILP and discuss its main results.
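    To make the PLP setting concrete, here is a minimal Python sketch of the distribution semantics that underlies most of these languages: independent probabilistic facts induce a distribution over possible worlds, and the probability of a query is the total weight of the worlds in which it succeeds. The program and probabilities are illustrative, not taken from the article.

```python
from itertools import product

# Illustrative probabilistic facts (ProbLog-style): each is independently
# true with the given probability.
prob_facts = {"heads(c1)": 0.6, "heads(c2)": 0.6}

def query_holds(world):
    """Deterministic rules: win :- heads(c1). win :- heads(c2)."""
    return world["heads(c1)"] or world["heads(c2)"]

def query_probability():
    """Sum the weight of every possible world where the query succeeds."""
    total = 0.0
    facts = list(prob_facts)
    for truth in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, truth))
        weight = 1.0
        for f, v in world.items():
            weight *= prob_facts[f] if v else 1.0 - prob_facts[f]
        if query_holds(world):
            total += weight
    return total

print(query_probability())  # 1 - 0.4 * 0.4 = 0.84
```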

    An abductive-inductive algorithm for probabilistic inductive logic programming

    No full text
    The integration of abduction and induction has led to a variety of non-monotonic ILP systems. XHAIL is one of these systems, in which abduction is used to compute hypotheses that subsume Kernel Sets. Peircebayes, on the other hand, is a recently proposed logic-based probabilistic programming approach that combines abduction with parameter learning to learn distributions of most likely explanations. In this paper, we propose an approach for integrating probabilistic inference with ILP. The basic idea is to redefine the inductive task of XHAIL as a statistical abduction problem, and to use Peircebayes to learn a probability distribution over hypotheses. An initial evaluation of the proposed algorithm is given using synthetic data.
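    The following is a schematic Python sketch of the statistical-abduction idea described above, not the actual XHAIL/Peircebayes implementation: each observation comes with the set of abduced hypotheses that could explain it, and an EM loop estimates a probability distribution over hypotheses. The hypothesis names and data are invented for illustration.

```python
from collections import defaultdict

# Each observation is the set of abduced hypotheses that explain it
# (hypothetical data; in XHAIL these would come from the abductive phase).
observations = [
    {"h1", "h2"},
    {"h1"},
    {"h2", "h3"},
    {"h1", "h3"},
]
hypotheses = ["h1", "h2", "h3"]
theta = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform start

for _ in range(50):
    # E-step: expected responsibility of each hypothesis per observation.
    counts = defaultdict(float)
    for expl in observations:
        z = sum(theta[h] for h in expl)
        for h in expl:
            counts[h] += theta[h] / z
    # M-step: renormalise the expected counts into a distribution.
    total = sum(counts.values())
    theta = {h: counts[h] / total for h in hypotheses}

print(theta)  # hypotheses that explain more observations get more mass
```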

    SkILL - a Stochastic Inductive Logic Learner

    Full text link
    Probabilistic Inductive Logic Programming (PILP) is a relatively unexplored area of Statistical Relational Learning which extends classic Inductive Logic Programming (ILP). This work introduces SkILL, a Stochastic Inductive Logic Learner, which takes probabilistically annotated data and produces First Order Logic theories. Data in several domains, such as medicine and bioinformatics, have an inherent degree of uncertainty that can be used to produce models closer to reality. SkILL can not only use this type of probabilistic data to extract non-trivial knowledge from databases, but it also addresses efficiency issues by introducing a novel, efficient and effective search strategy to guide the search in PILP environments. The capabilities of SkILL are demonstrated on three different datasets: (i) a synthetic toy example used to validate the system, (ii) a probabilistic adaptation of a well-known biological metabolism application, and (iii) a real-world medical dataset in the breast cancer domain. Results show that SkILL can perform as well as a deterministic ILP learner, while also being able to incorporate probabilistic knowledge that would otherwise not be considered.
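    As a rough illustration of the evaluation problem a PILP learner like SkILL faces (not SkILL's actual algorithm), the sketch below scores candidate clause sets against probabilistically annotated examples instead of boolean labels; all predicates, examples, and the scoring rule are hypothetical.

```python
# Each example: (facts that hold for this instance, annotated probability).
examples = [
    ({"enzyme(a)", "pathway(a)"}, 0.9),
    ({"enzyme(a)"},               0.4),
    ({"pathway(a)"},              0.3),
]

# A candidate theory is a set of clauses; here a clause is just the set
# of body literals that must all hold for the target to be predicted.
candidates = [
    [{"enzyme(a)", "pathway(a)"}],                 # one specific clause
    [{"enzyme(a)"}],                               # one general clause
    [{"enzyme(a)", "pathway(a)"}, {"enzyme(a)"}],  # two clauses
]

def predicts(theory, facts):
    return any(body <= facts for body in theory)

def score(theory):
    """Mean squared error between predictions (0/1) and annotations."""
    errs = [(float(predicts(theory, f)) - p) ** 2 for f, p in examples]
    return sum(errs) / len(errs)

best = min(candidates, key=score)
print(best, score(best))  # the theory closest to the annotated probabilities
```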

    Inference with Constrained Hidden Markov Models in PRISM

    Full text link
    A Hidden Markov Model (HMM) is a common statistical model which is widely used for the analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints and present constraint solving techniques for efficient inference. Defining HMMs with side-constraints in Constraint Logic Programming has advantages in terms of more compact expression and pruning opportunities during inference. We present a PRISM-based framework for extending HMMs with side-constraints and show how well-known constraints such as cardinality and all-different are integrated. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.
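    A minimal sketch, in plain Python rather than PRISM, of what a cardinality side-constraint does during HMM inference: the Viterbi dynamic program is extended with a counter so that paths violating the constraint are pruned. All model parameters are illustrative.

```python
import math

states = [0, 1]
init  = [0.5, 0.5]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]   # emit[state][symbol]
MAX_ONES = 2                        # side-constraint: visit state 1 at most twice

def constrained_viterbi(obs):
    # dp[(state, ones_used)] = best log-probability of a prefix ending there
    dp = {}
    for s in states:
        ones = 1 if s == 1 else 0
        if ones <= MAX_ONES:
            dp[(s, ones)] = math.log(init[s]) + math.log(emit[s][obs[0]])
    for o in obs[1:]:
        nxt = {}
        for (s, ones), lp in dp.items():
            for t in states:
                n = ones + (1 if t == 1 else 0)
                if n > MAX_ONES:
                    continue  # the constraint prunes this extension
                cand = lp + math.log(trans[s][t]) + math.log(emit[t][o])
                if cand > nxt.get((t, n), -math.inf):
                    nxt[(t, n)] = cand
        dp = nxt
    return max(dp.values())  # log-probability of the best constrained path

print(constrained_viterbi([1, 1, 0, 1, 0]))
```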

    Lifted Variable Elimination for Probabilistic Logic Programming

    Full text link
    Lifted inference has been proposed for various probabilistic logical frameworks in order to compute the probability of queries in a time that depends on the size of the domains of the random variables rather than the number of instances. Although various authors have underlined its importance for probabilistic logic programming (PLP), lifted inference has been applied up to now only to relational languages outside of logic programming. In this paper we adapt Generalized Counting First Order Variable Elimination (GC-FOVE) to the problem of computing the probability of queries to probabilistic logic programs under the distribution semantics. In particular, we extend the Prolog Factor Language (PFL) to include two new types of factors that are needed for representing ProbLog programs. These factors take into account the existing causal independence relationships among random variables and are managed by the extension to variable elimination proposed by Zhang and Poole for dealing with convergent variables and heterogeneous factors. Two new operators are added to GC-FOVE for treating heterogeneous factors. The resulting algorithm, called LP^2 for Lifted Probabilistic Logic Programming, has been implemented by modifying the PFL implementation of GC-FOVE and tested on three benchmarks for lifted inference. A comparison with PITA and ProbLog2 shows the potential of the approach. To appear in Theory and Practice of Logic Programming (TPLP).
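    A toy illustration (unrelated to LP^2's actual operators) of why lifted inference matters: for a query over n exchangeable probabilistic causes, grounded inference enumerates 2^n possible worlds, while a counting argument answers the same query in closed form in the domain size. The probabilities are illustrative.

```python
from itertools import product

p = 0.1   # each of n interchangeable causes fires independently
n = 10

def grounded(n):
    """P(at least one cause fires), by enumerating all 2^n worlds."""
    total = 0.0
    for world in product([True, False], repeat=n):
        w = 1.0
        for v in world:
            w *= p if v else 1.0 - p
        if any(world):
            total += w
    return total

def lifted(n):
    """Same query via a counting argument: constant work in the domain size."""
    return 1.0 - (1.0 - p) ** n

print(grounded(n), lifted(n))  # they agree; only one scales to large n
```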

    Declarative Modeling and Bayesian Inference of Dark Matter Halos

    Full text link
    Probabilistic programming allows specification of probabilistic models in a declarative manner. Recently, several new software systems and languages for probabilistic programming have been developed on the basis of newly developed and improved methods for approximate inference in probabilistic models. In this contribution a probabilistic model for an idealized dark matter localization problem is described. We first derive the probabilistic model for the inference of dark matter locations and masses, and then show how this model can be implemented using BUGS and Infer.NET, two software systems for probabilistic programming. Finally, the different capabilities of both systems are discussed. The presented dark matter model includes mainly non-conjugate factors; thus, it is difficult to implement this model with Infer.NET. Presented at the Workshop "Intelligent Information Processing", EUROCAST 2013; to appear in selected papers of Computer Aided Systems Theory - EUROCAST 2013 (LNCS, Springer), volume editors Roberto Moreno-Díaz, Franz R. Pichler, Alexis Quesada-Arencibia.
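    The sketch below is an idealized stand-in for this kind of localization model, written as a plain-Python grid approximation rather than in BUGS or Infer.NET: a point mass at an unknown position produces a distance-dependent signal observed with Gaussian noise, and the posterior over the position is computed on a grid. The response curve, sensor layout, and noise values are invented for illustration.

```python
import math

sensors = [0.0, 1.0, 2.0, 3.0]
true_pos, mass, sigma = 1.4, 2.0, 0.1

def signal(pos, x):
    return mass / (1.0 + (x - pos) ** 2)   # idealized response curve

# Synthetic noisy observations at the sensors (fixed for reproducibility).
obs = [signal(true_pos, x) + e
       for x, e in zip(sensors, [0.05, -0.08, 0.02, 0.04])]

grid = [i / 100.0 for i in range(0, 301)]  # candidate positions in [0, 3]

def log_likelihood(pos):
    return sum(-0.5 * ((y - signal(pos, x)) / sigma) ** 2
               for x, y in zip(sensors, obs))

# Flat prior over the grid; normalise the likelihood to get the posterior.
lls = [log_likelihood(pos) for pos in grid]
m = max(lls)                                # subtract max for stability
weights = [math.exp(ll - m) for ll in lls]
z = sum(weights)
posterior = [w / z for w in weights]
print(grid[posterior.index(max(posterior))])  # MAP estimate, near 1.4
```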

    Information and communication technology solutions for outdoor navigation in dementia

    Get PDF
    INTRODUCTION: Information and communication technology (ICT) is potentially mature enough to empower outdoor and social activities in dementia. However, current ICT-based devices have limited functionality and impact, mostly confined to safety. What is an ideal operational framework to enhance this field to support outdoor and social activities? METHODS: Review of the literature and cross-disciplinary expert discussion. RESULTS: A situation-aware ICT requires flexible fine-tuning by stakeholders of system usability and complexity of function, and of user safety and autonomy. It should operate by artificial intelligence/machine learning and should reflect harmonized stakeholder values, the social context, and the user's residual cognitive functions. ICT services should be proposed at the prodromal stage of dementia and should be carefully validated within the life space of users in terms of quality of life, social activities, and costs. DISCUSSION: The operational framework has the potential to produce ICT and services with high clinical impact but requires substantial investment.