    Probabilistic inductive constraint logic

    Probabilistic logical models deal effectively with the uncertain relations and entities typical of many real-world domains. In the field of probabilistic logic programming the aim is usually to learn such models to predict specific atoms or predicates of the domain, called target atoms/predicates. However, it can also be useful to learn classifiers for interpretations as a whole: to this end, we consider the models produced by the inductive constraint logic system, represented by sets of integrity constraints, and we propose a probabilistic version of them. Each integrity constraint is annotated with a probability, and the resulting probabilistic logical constraint model assigns a probability of being positive to interpretations. To learn both the structure and the parameters of such probabilistic models we propose the system PASCAL for "probabilistic inductive constraint logic". Parameter learning can be performed using gradient descent or L-BFGS. PASCAL has been tested on 11 datasets and compared with a few statistical relational systems and a system that builds relational decision trees (TILDE): it achieves better or comparable results in terms of area under the precision–recall and receiver operating characteristic curves, in comparable execution time.
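    The abstract does not spell out how the per-constraint probabilities combine into a classification. A common reading of probabilistic constraint models of this kind is that a constraint with probability p, violated by m groundings in an interpretation, lets the interpretation pass with probability (1 - p)^m, and the interpretation's probability of being positive is the product over all constraints. The Python sketch below illustrates that reading only; the data layout and the two toy constraints are invented for the example and are not PASCAL's actual interface.

        from math import prod

        def p_positive(constraints, interpretation):
            # Probability that the interpretation is classified positive, assuming
            # each violated grounding of a constraint with probability p independently
            # "fires" and any firing makes the interpretation negative:
            #   P(positive) = product over constraints of (1 - p) ** violations
            return prod((1.0 - p) ** count_violations(interpretation)
                        for p, count_violations in constraints)

        # Two invented constraints over an "advises" relation:
        # no self-advising (strong), no duplicate edges (weak).
        c1 = (0.8, lambda i: sum(1 for (a, b) in i["advises"] if a == b))
        c2 = (0.3, lambda i: len(i["advises"]) - len(set(i["advises"])))

        interp = {"advises": [("ann", "bob"), ("bob", "bob")]}
        print(p_positive([c1, c2], interp))
        # (1 - 0.8)**1 * (1 - 0.3)**0 = 0.2 (up to float rounding)

    Under this reading, the log-likelihood of a set of labelled interpretations is differentiable in the constraint probabilities, which is what makes parameter learning by gradient descent or L-BFGS, as mentioned in the abstract, a natural fit.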

    Learning Effect Axioms via Probabilistic Logic Programming

    In this paper we show how to automatically learn the structure and parameters of probabilistic effect axioms for the Simple Event Calculus (SEC) from positive and negative example interpretations stated as short dialogue sequences in natural language. We use the cplint framework for this task, which provides libraries for structure and parameter learning and for answering queries with exact and inexact inference. The example dialogues used for learning the structure of the probabilistic logic program are parsed into dependency structures and then further translated into Event Calculus notation with the help of a simple ontology. The novelty of our approach is that we not only process uncertainty in event recognition but also learn the structure of effect axioms, and we combine these two sources of uncertainty to successfully answer queries in this probabilistic setting. Interestingly, our extension of the logic-based version of the SEC is completely elaboration-tolerant, in the sense that the probabilistic version fully includes the logic-based version. This makes it possible to use the probabilistic version of the SEC in the traditional way as well as when we have to deal with uncertainty in the observed world. In the future, we would like to extend the probabilistic version of the SEC to deal, among other things, with concurrent actions and continuous change.
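    As a rough illustration of what a probabilistic effect axiom adds over the purely logic-based Simple Event Calculus, the toy sketch below propagates a single fluent through a narrative of events, where each event only initiates or terminates the fluent with some probability. It is a deliberate simplification (independent effects on one fluent), not cplint's inference machinery, and the event names and probabilities are invented.

        def holds_probability(narrative, initiates, terminates, p0=0.0):
            # Probability that a fluent holds after a narrative of events.
            #   narrative:  list of event names, in temporal order
            #   initiates:  dict event -> probability the event initiates the fluent
            #   terminates: dict event -> probability the event terminates the fluent
            #   p0:         probability the fluent holds initially
            p = p0
            for e in narrative:
                p_init = initiates.get(e, 0.0)
                p_term = terminates.get(e, 0.0)
                # the fluent holds afterwards if it is (re)initiated now, or if it
                # already held and is not terminated now (effects assumed independent)
                p = p_init + (1.0 - p_init) * p * (1.0 - p_term)
            return p

        # Hypothetical dialogue-derived narrative: "give" usually transfers
        # possession, "drop" sometimes ends it.
        print(holds_probability(["give", "drop"],
                                initiates={"give": 0.9},
                                terminates={"drop": 0.6}))
        # 0.9 * (1 - 0.6) = 0.36 (up to float rounding)

    Setting every probability to 0 or 1 recovers the usual deterministic effect axioms, which mirrors the elaboration tolerance noted in the abstract: the probabilistic version contains the logic-based version as a special case.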

    Interpolant-Based Transition Relation Approximation

    In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis. Comment: conference version at CAV 2005; 17 pages, 9 figures.
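    To see why exact image computation is expensive, consider computing the abstract image by brute force: for every candidate successor abstract state (a valuation of the predicates) one asks a decision procedure whether some concrete transition reaches it, so the number of calls grows exponentially with the number of predicates. The sketch below is a toy illustration of this cost under invented assumptions: a tiny concrete system, three hand-picked predicates, and an enumeration-based stand-in for the decision procedure; it is not the paper's algorithm.

        from itertools import product

        # Toy concrete system: the state is an integer x in a small range,
        # and the transition relation is x' = x + 1 (mod 8).
        STATES = range(8)
        def trans(x, x2):
            return x2 == (x + 1) % 8

        # Abstraction predicates over the concrete state.
        PREDICATES = [lambda x: x == 0, lambda x: x % 2 == 0, lambda x: x < 4]

        def abstract(x):
            # Abstract state: tuple of truth values of the predicates.
            return tuple(p(x) for p in PREDICATES)

        def sat(abs_pre, abs_post):
            # Stand-in for one decision-procedure call: is there a concrete
            # transition whose endpoints have these abstract values?
            return any(trans(x, x2) and abstract(x) == abs_pre and abstract(x2) == abs_post
                       for x in STATES for x2 in STATES)

        def exact_abstract_image(abs_states):
            # Exact image of a set of abstract states: one decision-procedure call
            # per candidate target valuation, i.e. 2**len(PREDICATES) per source.
            image, calls = set(), 0
            for a in abs_states:
                for b in product([False, True], repeat=len(PREDICATES)):
                    calls += 1
                    if sat(a, b):
                        image.add(b)
            return image, calls

        img, calls = exact_abstract_image({abstract(0)})
        print(img, calls)   # the abstract successor of the state x=0, after 8 oracle calls

    The paper's point is that model checkers normally avoid this cost with a weak approximation of the image, and the interpolant-based strengthening restores precision only where the approximation is too coarse, so the property can be proved without ever computing the exact image.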