
    Complexity of Nested Circumscription and Nested Abnormality Theories

    The need for a circumscriptive formalism that allows for simple yet elegant modular problem representation has led Lifschitz (AIJ, 1995) to introduce nested abnormality theories (NATs) as a tool for modular knowledge representation, tailored for applying circumscription to minimize exceptional circumstances. Abstracting from this particular objective, we propose L_{CIRC}, an extension of generic propositional circumscription that allows propositional combinations and nesting of circumscriptive theories. As we show, NATs are naturally embedded into this language, and the two are in fact of equal expressive capability. We then analyze the complexity of L_{CIRC} and NATs, and in particular the effect of nesting. The latter is found to be a source of complexity, which climbs the Polynomial Hierarchy as the nesting depth increases and reaches PSPACE-completeness in the general case. We also identify meaningful syntactic fragments of NATs which have lower complexity. In particular, we show that the generalization of Horn circumscription in the NAT framework remains coNP-complete, and that Horn NATs without fixed letters can be efficiently transformed into an equivalent Horn CNF, which implies polynomial solvability of the principal reasoning tasks. Finally, we also study extensions of NATs and briefly address the complexity of the first-order case. Our results give insight into the "cost" of using L_{CIRC} (resp. NATs) as a host language for expressing other formalisms such as action theories, narratives, or spatial theories. Comment: A preliminary abstract of this paper appeared in Proc. Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-01), pages 169-174, Morgan Kaufmann, 2001.
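
    As a reminder of the base operator that L_{CIRC} generalizes (this is the standard second-order definition of circumscription and may differ slightly in notation from the paper's propositional presentation), circumscribing the predicates P in a theory T, with Z allowed to vary and everything else fixed, amounts to

        \mathrm{CIRC}[T; P; Z] \;=\; T(P,Z) \;\wedge\; \neg\,\exists p\,\exists z\,\bigl(T(p,z) \wedge p < P\bigr),

    where p < P states that p is strictly stronger than P (p implies P pointwise, but not conversely). L_{CIRC} then closes such circumscribed theories under propositional connectives and further, nested applications of the operator.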

    The Language of Search

    This paper is concerned with a class of algorithms that perform exhaustive search on propositional knowledge bases. We show that each of these algorithms defines and generates a propositional language. Specifically, we show that the trace of a search can be interpreted as a combinational circuit, and a search algorithm then defines a propositional language consisting of circuits that are generated across all possible executions of the algorithm. In particular, we show that several versions of exhaustive DPLL search correspond to such well-known languages as FBDD, OBDD, and a precisely defined subset of d-DNNF. By thus mapping search algorithms to propositional languages, we provide a uniform and practical framework in which successful search techniques can be harnessed for compilation of knowledge into various languages of interest, and a new methodology whereby the power and limitations of search algorithms can be understood by looking up the tractability and succinctness of the corresponding propositional languages.
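
    The correspondence can be made concrete with a small sketch. The following is a minimal illustration (our own, not the paper's implementation) of exhaustive DPLL on a CNF whose recursion trace is recorded as a decision circuit; with the fixed variable order used here the recorded trace is OBDD-like, and all names and data structures are assumptions made for the example.

        # Clauses are frozensets of integer literals; a negative integer is a negated variable.
        def condition(clauses, lit):
            """Simplify a clause set under the assignment that makes `lit` true."""
            out = []
            for c in clauses:
                if lit in c:
                    continue              # clause satisfied, drop it
                out.append(c - {-lit})    # remove the falsified literal
            return out

        def dpll_trace(clauses, order):
            """Exhaustive DPLL that returns its trace as a decision circuit."""
            if any(len(c) == 0 for c in clauses):
                return "0"                # contradiction: constant-false gate
            if not clauses:
                return "1"                # all clauses satisfied: constant-true gate
            var, rest = order[0], order[1:]
            lo = dpll_trace(condition(clauses, -var), rest)   # branch var = false
            hi = dpll_trace(condition(clauses, var), rest)    # branch var = true
            return ("ite", var, hi, lo)   # decision gate recorded in the trace

        # Usage: (x1 or x2) and (not x1 or x2)
        cnf = [frozenset({1, 2}), frozenset({-1, 2})]
        print(dpll_trace(cnf, [1, 2]))

    Refinements of this scheme, such as caching results for equivalent subproblems so that identical subcircuits are shared, change which language the generated traces fall into; that is the kind of correspondence the paper makes precise.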

    Query Answering in Probabilistic Data and Knowledge Bases

    Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art for storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic limitations, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations not only lead to unwanted consequences, but also put such systems on weak footing in important tasks, query answering being a central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together some recent paradigms from logics, probabilistic inference, and database theory.
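
    To make the tuple-independence assumption mentioned above concrete, the toy sketch below (illustrative only, not the system developed in the thesis) computes the probability of a Boolean query by summing the weights of the possible worlds that satisfy it; every fact is assumed to hold independently with its annotated probability, and all relation and constant names are invented for the example.

        from itertools import product

        # Independent probabilistic facts: R(a), R(b), S(a) with their marginal probabilities.
        facts = {("R", "a"): 0.8, ("R", "b"): 0.5, ("S", "a"): 0.6}

        def boolean_query(world):
            # Exists x such that R(x) and S(x) both hold in the world.
            return any(("R", x) in world and ("S", x) in world for (_, x) in world)

        def query_probability(facts, query):
            items = list(facts.items())
            total = 0.0
            for bits in product([True, False], repeat=len(items)):
                world = {fact for (fact, _), keep in zip(items, bits) if keep}
                weight = 1.0
                for (_, p), keep in zip(items, bits):
                    weight *= p if keep else 1.0 - p
                if query(world):
                    total += weight
            return total

        print(query_probability(facts, boolean_query))   # 0.8 * 0.6 = 0.48

    Enumerating worlds is exponential in the number of facts; dedicated query answering algorithms, such as those developed in the thesis, aim to avoid this brute-force enumeration whenever the query structure permits.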

    Datalog Rewritability of Disjunctive Datalog Programs and its Applications to Ontology Reasoning

    We study the problem of rewriting a disjunctive datalog program into plain datalog. We show that a disjunctive program is rewritable if and only if it is equivalent to a linear disjunctive program, thus providing a novel characterisation of datalog rewritability. Motivated by this result, we propose weakly linear disjunctive datalog, a novel rule-based KR language that extends both datalog and linear disjunctive datalog and for which reasoning is tractable in data complexity. We then explore applications of weakly linear programs to ontology reasoning and propose a tractable extension of OWL 2 RL with disjunctive axioms. Our empirical results suggest that many non-Horn ontologies can be reduced to weakly linear programs and that query answering over such ontologies using a datalog engine is feasible in practice. Comment: 14 pages. To appear at AAAI-1
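
    To illustrate the linearity condition the characterisation refers to (predicate names here are invented, and "IDB" denotes predicates defined by the rules rather than stored in the data): a disjunctive rule is linear if its body contains at most one IDB atom, as in

        A(x) \lor B(x) \leftarrow \mathit{edge}(x,y) \wedge A(y),

    whereas a rule such as

        A(x) \leftarrow A(y) \wedge A(z) \wedge \mathit{merge}(y,z,x)

    is not linear, since two IDB atoms occur in its body. Weak linearity, roughly, imposes this restriction only on body atoms whose predicates can depend on disjunction, leaving ordinary datalog predicates unconstrained.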

    Approximate Assertional Reasoning Over Expressive Ontologies

    This thesis provides approximate reasoning methods for scalable assertional reasoning whose computational properties can be established in a well-understood way, namely in terms of soundness and completeness, and whose quality can be analyzed in terms of statistical measurements, namely recall and precision. The basic idea of these approximate reasoning methods is to accept a controlled loss in the quality of reasoning results in exchange for increased speed.
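
    In this setting, with A the set of answers returned by the approximate method and E the set of answers entailed under the exact semantics, the quality measures are the usual ones:

        \mathrm{precision} = \frac{|A \cap E|}{|A|}, \qquad \mathrm{recall} = \frac{|A \cap E|}{|E|},

    so a sound approximation has precision 1 and a complete one has recall 1.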

    A Multi-Layered Architecture for Collaborative and Decentralized Consequence Finding

    The consequence finding problem consists in producing all the consequences of a logical theory or, depending on the application context, a restricted subset of these consequences. When the available knowledge is naturally scattered among different sources of information, computing such consequences with respect to the global theory in a decentralized way is a challenging problem. This paper presents Somewhere2, a multilayered architecture that may be used to solve such consequence finding problems in peer-to-peer networks of collaborating entities that may evolve over time. The general layout of this architecture is described, as well as the roles of its main components. Thanks to a careful and modular design, the resulting framework is very generic. This facilitates alternative implementations of specific components as well as its extension with additional features. First experimental results are presented, illustrating the scalability and robustness of this architecture. This framework may be used as a robust building block for more advanced distributed applications, such as Peer Data Management Systems.
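
    As a point of reference for the problem being distributed, the following is a minimal centralized sketch (our own illustration, not Somewhere2's algorithm) of consequence finding by saturating a propositional clause set under resolution and keeping only the non-subsumed clauses.

        from itertools import combinations

        def resolvents(c1, c2):
            """All resolvents of two clauses; a clause is a frozenset of integer literals."""
            out = set()
            for lit in c1:
                if -lit in c2:
                    out.add(frozenset((c1 - {lit}) | (c2 - {-lit})))
            return out

        def consequences(clauses):
            """Saturate under resolution, then keep only the non-subsumed clauses."""
            clauses = set(clauses)
            changed = True
            while changed:
                changed = False
                for c1, c2 in combinations(list(clauses), 2):
                    for r in resolvents(c1, c2):
                        if not any(c <= r for c in clauses):   # skip clauses already subsumed
                            clauses.add(r)
                            changed = True
            return {c for c in clauses if not any(d < c for d in clauses)}

        # Usage: from (p or q) and (not q or r), the consequence (p or r) is derived.
        print(consequences([frozenset({1, 2}), frozenset({-2, 3})]))

    In the decentralized setting targeted by Somewhere2, no peer holds the whole clause set; roughly, the saturation is driven by message exchanges between acquainted peers, which is what the layered architecture organizes.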

    Propositional update operators based on formula/literal dependence

    We present and study a general family of belief update operators in a propositional setting. Its operators are based on formula/literal dependence, which is more fine-grained than the notion of formula/variable dependence proposed in the literature: formula/variable dependence is a particular case of formula/literal dependence. Our update operators are defined according to the "forget-then-conjoin" scheme: updating a belief base by an input formula consists in first forgetting in the base every literal on which the input formula has a negative influence, and then conjoining the resulting base with the input formula. The operators of our family differ in the underlying notion of formula/literal dependence, which may be defined syntactically or semantically, and which may or may not exploit further information such as known persistent literals and pre-set dependencies. We argue that this allows the frame problem and the ramification problem to be handled in a more appropriate way. We evaluate the update operators of our family w.r.t. two important dimensions: the logical dimension, by checking the status of the Katsuno-Mendelzon postulates for update, and the computational dimension, by identifying the complexity of a number of decision problems (including model checking, consistency and inference), both in the general case and in some restricted cases, as well as by studying compactability issues. It follows that several operators of our family are interesting alternatives to previous belief update operators.
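
    A minimal executable sketch of the "forget-then-conjoin" scheme is given below, using plain variable forgetting, i.e. the coarsest member of the family discussed above; the operators of the paper refine the set of literals to forget via formula/literal dependence. Formulas are modelled as Python predicates over assignments, and all names are illustrative.

        def forget(phi, var):
            """Existential (variable) forgetting: phi[var<-True] or phi[var<-False]."""
            return lambda m: phi({**m, var: True}) or phi({**m, var: False})

        def update(base, mu, mu_vars):
            """Forget in `base` every variable `mu` depends on, then conjoin `mu`."""
            for v in mu_vars:
                base = forget(base, v)
            return lambda m: base(m) and mu(m)

        # Usage: the base says the door is open and the light is on; the update says the door is closed.
        base = lambda m: m["door"] and m["light"]
        mu = lambda m: not m["door"]
        updated = update(base, mu, ["door"])
        print(updated({"door": False, "light": True}))   # True: the light stays on
        print(updated({"door": False, "light": False}))  # False

    Forgetting only the literals on which the input has a negative influence, as the paper's operators do, preserves more of the base than forgetting whole variables, which is how the finer-grained dependence notion helps with the frame problem.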

    Relaxing and Restraining Queries for OBDA

    In ontology-based data access (OBDA), ontologies have been successfully employed for querying possibly unstructured and incomplete data. In this paper, we advocate using ontologies not only to formulate queries and compute their answers, but also to modify queries by relaxing or restraining them, so that they can retrieve either more or fewer answers over a given dataset. Towards this goal, we first illustrate that some domain knowledge that could be naturally leveraged in OBDA can be expressed using complex role inclusions (CRI). Queries over ontologies with CRI are not first-order (FO) rewritable in general. We propose an extension of DL-Lite with CRI, and show that conjunctive queries over ontologies in this extension are FO rewritable. Our main contribution is a set of rules to relax and restrain conjunctive queries (CQs). Firstly, we define rules that use the ontology to produce CQs that are relaxations/restrictions over any dataset. Secondly, we introduce a set of data-driven rules that leverage patterns in the current dataset to obtain more fine-grained relaxations and restrictions.
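
    For instance (with concept and role names invented for illustration), an ontology axiom stating that Professor is subsumed by Faculty licenses a relaxation rule that replaces an atom by a more general one:

        q(x) \leftarrow \mathit{Professor}(x) \wedge \mathit{teaches}(x,y)
        \quad\rightsquigarrow\quad
        q'(x) \leftarrow \mathit{Faculty}(x) \wedge \mathit{teaches}(x,y),

    so every answer to q is also an answer to q' over any dataset paired with that ontology, while applying the replacement in the opposite direction restrains the query. The rules in the paper generalize this idea, also exploiting complex role inclusions and, in the data-driven case, patterns in the current dataset.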

    Type-elimination-based reasoning for the description logic SHIQbs using decision diagrams and disjunctive datalog

    We propose a novel, type-elimination-based method for reasoning in the description logic SHIQbs including DL-safe rules. To this end, we first establish a knowledge compilation method converting the terminological part of an ALCIb knowledge base into an ordered binary decision diagram (OBDD) which represents a canonical model. This OBDD can in turn be transformed into disjunctive Datalog and merged with the assertional part of the knowledge base in order to perform combined reasoning. In order to leverage our technique for full SHIQbs, we provide a stepwise reduction from SHIQbs to ALCIb that preserves satisfiability and entailment of positive and negative ground facts. The proposed technique is shown to be worst-case optimal w.r.t. combined and data complexity and easily admits extensions with ground conjunctive queries. Comment: 38 pages, 3 figures, camera-ready version of paper accepted for publication in Logical Methods in Computer Science.
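
    As a generic illustration of the type-elimination idea (this is not the paper's OBDD-based encoding, and the predicate deciding support is an assumed callback), one starts from all candidate types and repeatedly discards those whose existential requirements can no longer be met by the surviving types, until a fixpoint is reached:

        def eliminate_types(candidates, supported):
            """Return the greatest subset of `candidates` in which every type is supported."""
            types = set(candidates)
            changed = True
            while changed:
                changed = False
                for t in list(types):
                    if not supported(t, types):
                        types.discard(t)   # t demands a successor no surviving type provides
                        changed = True
            return types

        # Toy usage: each "type" lists the types it needs as successors.
        needs = {"t0": {"t1"}, "t1": {"t3"}, "t2": set(), "t3": {"missing"}}
        supported = lambda t, alive: needs[t] <= alive
        print(eliminate_types(needs, supported))   # {'t2'}: t3, then t1, then t0 are eliminated

    The set of candidate types is exponential in general; the OBDD mentioned above serves as a symbolic representation of such sets, avoiding their explicit enumeration.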

    DFKI publications : the first four years ; 1990 - 1993
