5,352 research outputs found

    Probabilistic description logics for subjective uncertainty

    We propose a family of probabilistic description logics (DLs) that are derived in a principled way from Halpern's probabilistic first-order logic. The resulting probabilistic DLs have a two-dimensional semantics similar to temporal DLs and are well suited for representing subjective probabilities. We carry out a detailed study of reasoning in the new family of logics, concentrating on probabilistic extensions of the DLs ALC and EL, and showing that the complexity ranges from PTime via ExpTime and 2ExpTime to undecidable.
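    To convey the flavour of such a logic (in our own notation, which may differ from the paper's exact syntax; the concept and role names are hypothetical), a probability constructor can be applied to an ordinary ALC concept:

        % Hypothetical concept in a probabilistic extension of ALC:
        % "patients who, with subjective probability at least 0.9, have some symptom that is a fever"
        \[
          \mathit{Patient} \sqcap P_{\geq 0.9}\, \exists \mathit{hasSymptom}.\mathit{Fever}
        \]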

    Heuristic Ranking in Tightly Coupled Probabilistic Description Logics

    The Semantic Web effort has steadily been gaining traction in recent years. In particular, Web search companies are realizing that their products need to evolve towards richer semantic search capabilities. Description logics (DLs) have been adopted as the formal underpinnings for the Semantic Web languages used in describing ontologies. Reasoning under uncertainty has recently taken a leading role in this arena, given the nature of data found on the Web. In this paper, we present a probabilistic extension of the DL EL++ (which underlies the OWL 2 EL profile) that uses Markov logic networks (MLNs) as its probabilistic semantics. This extension is tightly coupled, meaning that probabilistic annotations in formulas can refer to objects in the ontology. We show that, even though the tightly coupled nature of our language makes many basic operations data-intractable, we can leverage a sublanguage of MLNs that allows us to rank the atomic consequences of an ontology relative to their probability values (so-called ranking queries) even when these values are not fully computed. We present an anytime algorithm to answer ranking queries, provide an upper bound on the error that it incurs, and give a criterion for deciding when results are guaranteed to be correct. Comment: Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI 2012).
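    The following is a minimal, self-contained Python sketch of the general anytime idea described above; it is not the paper's algorithm, and the atoms, probabilities, and refinement step are invented for illustration. It maintains lower and upper bounds on the probability of each atomic consequence, tightens the widest interval first, and returns a ranking by lower bound together with the largest remaining interval width as an error bound:

        # Illustrative anytime ranking loop (hypothetical data; not the algorithm from the paper).
        from dataclasses import dataclass

        @dataclass
        class Consequence:
            name: str
            true_p: float        # stand-in for the expensive-to-compute exact probability
            low: float = 0.0     # current lower bound on the probability
            high: float = 1.0    # current upper bound on the probability

            def refine(self, shrink: float = 0.5) -> None:
                """Tighten the bounds towards the true probability (placeholder for real inference work)."""
                self.low += (self.true_p - self.low) * shrink
                self.high -= (self.high - self.true_p) * shrink

        def anytime_rank(consequences, max_error=0.05, max_steps=100):
            """Refine bounds until every interval is narrower than max_error (or the step budget runs out),
            then rank by lower bound and report the largest remaining interval width as the error bound."""
            for _ in range(max_steps):
                if all(c.high - c.low <= max_error for c in consequences):
                    break
                widest = max(consequences, key=lambda c: c.high - c.low)
                widest.refine()
            ranking = sorted(consequences, key=lambda c: c.low, reverse=True)
            error = max(c.high - c.low for c in consequences)
            return ranking, error

        if __name__ == "__main__":
            atoms = [Consequence("hasTopic(doc1, Sports)", true_p=0.8),
                     Consequence("hasTopic(doc1, Politics)", true_p=0.3),
                     Consequence("subClassOf(Blog, WebSite)", true_p=0.95)]
            ranking, error = anytime_rank(atoms)
            for c in ranking:
                print(f"{c.name}: [{c.low:.3f}, {c.high:.3f}]")
            print(f"worst-case interval width: {error:.3f}")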

    Semantics for Probabilistic Inference

    A number of writers (Joseph Halpern and Fahiem Bacchus among them) have offered semantics for formal languages in which inferences concerning probabilities can be made. Our concern is different. This paper provides a formalization of nonmonotonic inferences in which the conclusion is supported only to a certain degree. Such inferences are clearly 'invalid', since they must allow the falsity of a conclusion even when the premises are true. Nevertheless, such inferences can be characterized both syntactically and semantically. The 'premises' of probabilistic arguments are sets of statements (as in a database or knowledge base); the conclusions are categorical statements in the language. We provide standards both for this form of inference, in which high probability is required, and for an inference in which the conclusion is qualified by an intermediate interval of support. Comment: Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI 1992).
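    As a toy illustration of the two acceptance standards mentioned above (the threshold, interval values, and predicate names are invented, and this is not the paper's formalism), a conclusion could be accepted outright when its interval of support clears a high-probability threshold, and otherwise reported qualified by that interval:

        # Toy sketch: accept a conclusion when its interval of support meets a high-probability
        # standard; otherwise report the conclusion qualified by its interval of support.
        from typing import Tuple

        HIGH_PROBABILITY = 0.9  # hypothetical acceptance threshold

        def classify(conclusion: str, support: Tuple[float, float]) -> str:
            low, high = support
            if low >= HIGH_PROBABILITY:
                return f"ACCEPT {conclusion}: support [{low}, {high}] meets the high-probability standard"
            return f"QUALIFIED {conclusion}: supported only to degree [{low}, {high}]"

        if __name__ == "__main__":
            # Hypothetical knowledge-base statistics about an individual 'tweety'.
            print(classify("flies(tweety)", (0.93, 0.98)))          # accepted outright
            print(classify("nests_in_trees(tweety)", (0.60, 0.80))) # only interval-qualified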