30,612 research outputs found

    An anytime deduction heuristic for first order probabilistic logic

    This thesis describes an anytime deduction heuristic for the decision and optimization forms of the first-order probabilistic logic problem, which was revived by Nilsson in 1986. Reasoning under uncertainty is a central issue for AI applications such as expert systems and automated theorem provers. Among the models and methods proposed for dealing with uncertainty, some, like Nilsson's, are based on logic and probability. Nilsson revisited the early works of Boole (1854) and Hailperin (1976) and reformulated them in an AI framework. The decision form of the probabilistic logic problem, also known as PSAT, consists of deciding, given a set of logical sentences each annotated with a probability of being true, whether the sentences and their probabilities are consistent. In the optimization form, assuming the system of probabilistic formulas is already consistent, the problem is: given an additional sentence, find the tightest probability bounds on it such that the overall system remains consistent. Solution schemes, both heuristic and exact, have been proposed within the propositional framework; even though first-order logic is more expressive than propositional logic, far more work has been published in the propositional setting. The main objective of this thesis is to propose a heuristic solution scheme, namely an anytime deduction technique, for the decision and optimization forms of the first-order probabilistic logic problem. Jaumard et al. [33] proposed an anytime deduction algorithm for propositional probabilistic logic, which we extend to the first-order context.
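
    For orientation, the decision form above has a classical linear-programming reading (going back to Boole and Nilsson): the annotated sentences are consistent iff some probability distribution over possible worlds reproduces every annotation. The toy Python sketch below checks this by brute force; it enumerates all possible worlds, which is precisely what an anytime heuristic like the one in this thesis is designed to avoid, and the scipy-based feasibility test is an illustration, not the thesis's method.

        # Toy PSAT feasibility check: a probability distribution over all
        # possible worlds must reproduce each sentence's annotation.
        from itertools import product
        from scipy.optimize import linprog

        def psat_consistent(atoms, sentences):
            # sentences: list of (eval_fn, prob); eval_fn maps a world
            # (dict atom -> bool) to the sentence's truth value.
            worlds = [dict(zip(atoms, vals))
                      for vals in product([False, True], repeat=len(atoms))]
            # One equality constraint per sentence: the total mass of the
            # worlds satisfying it must equal the annotated probability.
            A_eq = [[1.0 if f(w) else 0.0 for w in worlds] for f, _ in sentences]
            A_eq.append([1.0] * len(worlds))  # the distribution sums to 1
            b_eq = [p for _, p in sentences] + [1.0]
            res = linprog(c=[0.0] * len(worlds), A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0.0, 1.0)] * len(worlds))
            return res.success

        # P(a) = 0.7 with P(a or b) = 0.6 is inconsistent, since
        # P(a or b) >= P(a) holds under any distribution.
        print(psat_consistent(["a", "b"],
                              [(lambda w: w["a"], 0.7),
                               (lambda w: w["a"] or w["b"], 0.6)]))  # False

    The optimization form fits the same picture: keep the constraints and minimize/maximize the total mass of the worlds satisfying the additional sentence to obtain the tightest probability bounds.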

    Anytime Computation of Cautious Consequences in Answer Set Programming

    Query answering in Answer Set Programming (ASP) is usually solved by computing (a subset of) the cautious consequences of a logic program. This task is computationally very hard, and there are programs for which computing cautious consequences is not viable in reasonable time. However, current ASP solvers produce the (whole) set of cautious consequences only at the end of their computation. This paper reports on strategies for computing cautious consequences, also introducing anytime algorithms able to produce sound answers during the computation.
    Comment: To appear in Theory and Practice of Logic Programming
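
    As a concrete rendering of "sound answers during the computation": for a coherent program P, an atom a is a cautious consequence exactly when P extended with the constraint ":- a." has no answer set, so every atom that passes this test can be reported immediately. The sketch below, which assumes the clingo Python API, shows this naive one-test-per-atom strategy; it illustrates the anytime idea only and is not one of the paper's algorithms.

        # Anytime cautious consequences, naive strategy: each emitted atom is
        # already guaranteed to hold in every answer set, so partial output
        # is sound. Assumes the clingo Python bindings are installed.
        import clingo

        def cautious_anytime(program_text, candidate_atoms, emit):
            for atom in candidate_atoms:
                ctl = clingo.Control()
                # Forbid answer sets containing `atom`; if none remain, the
                # (coherent) program cautiously entails `atom`.
                ctl.add("base", [], program_text + f"\n:- {atom}.")
                ctl.ground([("base", [])])
                if not ctl.solve().satisfiable:
                    emit(atom)  # sound answer, produced before completion

        cautious_anytime("a. b :- a. {c}.", ["a", "b", "c"], print)  # a, b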

    Heuristic Ranking in Tightly Coupled Probabilistic Description Logics

    The Semantic Web effort has steadily been gaining traction in recent years. In particular, Web search companies are realizing that their products need to evolve towards richer semantic search capabilities. Description logics (DLs) have been adopted as the formal underpinnings for Semantic Web languages used in describing ontologies. Reasoning under uncertainty has recently taken a leading role in this arena, given the nature of data found on the Web. In this paper, we present a probabilistic extension of the DL EL++ (which underlies the OWL 2 EL profile) using Markov logic networks (MLNs) as probabilistic semantics. This extension is tightly coupled, meaning that probabilistic annotations in formulas can refer to objects in the ontology. We show that, even though the tightly coupled nature of our language makes many basic operations data-intractable, we can leverage a sublanguage of MLNs that allows us to rank the atomic consequences of an ontology relative to their probability values (so-called ranking queries) even when these values are not fully computed. We present an anytime algorithm to answer ranking queries, provide an upper bound on the error it incurs, and give a criterion for deciding when results are guaranteed to be correct.
    Comment: Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI 2012)
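
    The anytime flavor of ranking queries can be pictured generically: if the algorithm maintains an interval [lo, hi] around each atomic consequence's probability, it can already order the atoms and bound how wrong that order can be. The sketch below is purely illustrative of this interval bookkeeping; it is not the paper's algorithm, error bound, or correctness criterion.

        # Toy anytime ranking over partially computed probability intervals.
        # intervals: dict atom -> (lo, hi), with lo <= true probability <= hi.
        def rank_with_error(intervals):
            # Rank by interval midpoint; the true probability deviates from
            # the midpoint by at most half the widest interval.
            ranking = sorted(intervals, key=lambda a: -sum(intervals[a]) / 2)
            error = max(hi - lo for lo, hi in intervals.values()) / 2
            # The order is guaranteed once consecutive intervals are disjoint.
            certain = all(intervals[x][0] >= intervals[y][1]
                          for x, y in zip(ranking, ranking[1:]))
            return ranking, error, certain

        print(rank_with_error({"p(a)": (0.8, 0.9),
                               "q(b)": (0.2, 0.6),
                               "r(c)": (0.1, 0.15)}))
        # (['p(a)', 'q(b)', 'r(c)'], 0.2, True)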

    Ancestral Causal Inference

    Constraint-based causal discovery from limited data is a notoriously difficult challenge due to the many borderline independence test decisions. Several approaches to improve the reliability of the predictions by exploiting redundancy in the independence information have been proposed recently. Though promising, existing approaches can still be greatly improved in terms of accuracy and scalability. We present a novel method that reduces the combinatorial explosion of the search space by using a more coarse-grained representation of causal information, drastically reducing computation time. Additionally, we propose a method to score causal predictions based on their confidence. Crucially, our implementation also allows one to easily combine observational and interventional data and to incorporate various types of available background knowledge. We prove soundness and asymptotic consistency of our method and demonstrate that it can outperform the state of the art on synthetic data, achieving a speedup of several orders of magnitude. We illustrate its practical feasibility by applying it to a challenging protein data set.
    Comment: In Proceedings of Advances in Neural Information Processing Systems 29 (NIPS 2016)
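
    One way to picture "scoring causal predictions based on their confidence" is as a gap between optimal losses: how much worse the best explanation of the weighted independence-test results becomes when the prediction is forced to fail versus to hold. The sketch below is a toy rendering of that idea with hypothetical encodings, not the paper's exact definitions.

        # Toy confidence score for a causal prediction over candidate causal
        # structures, with weighted (in)dependence statements as soft
        # constraints; all encodings here are hypothetical.
        def confidence(structures, constraints, prediction):
            def loss(s):  # total weight of the constraints s violates
                return sum(w for test, w in constraints if not test(s))
            best_with = min(loss(s) for s in structures if prediction(s))
            best_without = min(loss(s) for s in structures if not prediction(s))
            return best_without - best_with  # higher = more confident

        # Example: structures as sets of ancestral relations ("X" ~> "Y").
        structs = [frozenset(), {("X", "Y")}, {("Y", "X")}]
        cons = [(lambda s: ("X", "Y") in s, 2.0),      # evidence for X ~> Y
                (lambda s: ("Y", "X") not in s, 1.0)]  # evidence against Y ~> X
        print(confidence(structs, cons, lambda s: ("X", "Y") in s))  # 2.0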

    Renormalization and Computation II: Time Cut-off and the Halting Problem

    This is the second installment of the project initiated in [Ma3]. In the first part, I argued that both the philosophy and the technique of perturbative renormalization in quantum field theory can be meaningfully transplanted to the theory of computation, and sketched several contexts supporting this view. In this second part, I address some of the issues raised in [Ma3] and develop them in three contexts: a categorification of algorithmic computations; time cut-off and anytime algorithms; and finally, a Hopf algebra renormalization of the Halting Problem.
    Comment: 28 pages
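
    Of the three contexts, "time cut-off and anytime algorithms" has a simple operational shape, sketched below as a generic wrapper; this only illustrates the computational notion being renormalized and has nothing to do with the Hopf-algebraic construction itself.

        # Generic time cut-off wrapper: refine an answer until the budget
        # expires, then return the best result obtained so far.
        import time

        def anytime(initial, improve, budget_s):
            best = initial
            deadline = time.monotonic() + budget_s
            while time.monotonic() < deadline:
                nxt = improve(best)
                if nxt is None:  # converged before the cut-off
                    break
                best = nxt
            return best

        # E.g., Newton iteration for sqrt(2), stopped only by the cut-off:
        print(anytime(1.0, lambda x: (x + 2.0 / x) / 2, budget_s=0.01))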
