
    Adding DL-Lite TBoxes to Proper Knowledge Bases

    Levesque’s proper knowledge bases (proper KBs) correspond to infinite sets of ground positive and negative facts, with the notable property that for FOL formulas in a certain normal form, which includes conjunctive queries and positive queries possibly extended with a controlled form of negation, entailment reduces to formula evaluation. However, proper KBs represent extensional knowledge only. In description logic terms, they correspond to ABoxes. In this paper, we augment them with DL-Lite TBoxes, expressing intensional knowledge (i.e., the ontology of the domain). DL-Lite has the notable property that conjunctive query answering over TBoxes and standard description logic ABoxes is reducible to formula evaluation over the ABox only. Here, we investigate whether such a property extends to ABoxes consisting of proper KBs. Specifically, we consider two DL-Lite variants: DL-Lite_rdfs, roughly corresponding to RDFS, and DL-Lite_core, roughly corresponding to OWL 2 QL. We show that when a DL-Lite_rdfs TBox is coupled with a proper KB, the TBox can be compiled away, reducing query answering to evaluation on the proper KB alone. But this reduction is no longer possible when we associate proper KBs with DL-Lite_core TBoxes. Indeed, we show that in the latter case, query answering even for conjunctive queries becomes coNP-hard in data complexity.
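    The "compiling away" described above can be pictured as rewriting a concept atom into the union of its subconcepts and evaluating that union directly over the stored facts. The following minimal Python sketch illustrates the principle on an invented toy TBox and ABox; it is an illustration of the rewriting idea, not the paper's algorithm.

```python
# Minimal sketch (toy data, not the paper's algorithm): compiling away a
# DL-Lite_rdfs-style TBox by rewriting a concept atom into the union of its
# subconcepts and evaluating the result directly on the extensional facts.

from collections import defaultdict

# Hypothetical TBox: subclass axioms "A is-a B"
subclass_of = [("GradStudent", "Student"), ("Student", "Person")]

# Hypothetical ABox: ground unary facts Concept(individual)
facts = {("GradStudent", "alice"), ("Student", "bob"), ("Professor", "carol")}

def subconcepts(concept):
    """All concepts subsumed by `concept` (reflexive-transitive closure)."""
    children = defaultdict(set)
    for sub, sup in subclass_of:
        children[sup].add(sub)
    seen, stack = {concept}, [concept]
    while stack:
        c = stack.pop()
        for sub in children[c] - seen:
            seen.add(sub)
            stack.append(sub)
    return seen

def answer_concept_query(concept):
    """Evaluate q(x) :- concept(x) after compiling the TBox away:
    the rewritten query is a union over all subconcepts of `concept`."""
    rewriting = subconcepts(concept)
    return {ind for (c, ind) in facts if c in rewriting}

print(sorted(answer_concept_query("Person")))   # ['alice', 'bob']
```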

    Combined FO rewritability for conjunctive query answering in DL-Lite

    Standard description logic (DL) reasoning services such as satisfiability and subsumption mainly aim to support TBox design. When the design stage is over and the TBox is used in an actual application, it is usually combined with instance data stored in an ABox, and therefore query answering becomes the most important reasoning service.

    Query Rewriting and Optimization for Ontological Databases

    Ontological queries are evaluated against a knowledge base consisting of an extensional database and an ontology (i.e., a set of logical assertions and constraints which derive new intensional knowledge from the extensional database), rather than directly on the extensional database. The evaluation and optimization of such queries is an intriguing new problem for database research. In this paper, we discuss two important aspects of this problem: query rewriting and query optimization. Query rewriting consists of the compilation of an ontological query into an equivalent first-order query against the underlying extensional database. We present a novel query rewriting algorithm for rather general types of ontological constraints which is well suited for practical implementations. In particular, we show how a conjunctive query against a knowledge base, expressed using linear and sticky existential rules, that is, members of the recently introduced Datalog+/- family of ontology languages, can be compiled into a union of conjunctive queries (UCQ) against the underlying database. Ontological query optimization, in this context, attempts to improve this rewriting process so as to produce UCQ rewritings that are as small and cost-effective as possible for an input query. (arXiv admin note: text overlap with arXiv:1312.5914 by other authors.)
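    To make the rewriting step concrete, the sketch below walks through a single backward-rewriting step with one invented linear existential rule, turning the input conjunctive query into a two-element UCQ that is then evaluated over a toy database. The rule, query and data are assumptions for illustration only; the paper's algorithm handles far more general cases.

```python
# Minimal sketch (toy rule, not the paper's algorithm): one backward rewriting
# step with a linear existential rule, producing a union of conjunctive queries
# (UCQ) that can be evaluated directly on the extensional database.

# Hypothetical linear rule (Datalog+/- style):  person(X) -> exists Y. hasParent(X, Y)
# Hypothetical input query:                     q(X) :- hasParent(X, Y)
#
# Because Y appears only existentially in the rule head and only once in the
# query, the atom hasParent(X, Y) can be replaced by the rule body person(X).
ucq = [
    [("hasParent", "X", "Y")],   # the original conjunctive query
    [("person", "X")],           # the query obtained by one rewriting step
]

# Toy extensional database (assumed data, for illustration only).
db = {("hasParent", "ann", "bea"), ("person", "carl")}

def eval_cq(cq):
    """Evaluate a single conjunctive query; each CQ here has one atom, so we
    simply match the predicate and project the answer variable X."""
    answers = set()
    for atom in cq:
        for fact in db:
            if fact[0] == atom[0]:
                answers.add(fact[1])  # binding of X (first argument)
    return answers

# The certain answers are the union over all CQs in the rewriting: 'carl' is
# found only thanks to the rewritten disjunct person(X).
print(sorted(set().union(*map(eval_cq, ucq))))   # ['ann', 'carl']
```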

    Time-Aware Probabilistic Knowledge Graphs

    The emergence of open information extraction as a tool for constructing and expanding knowledge graphs has aided the growth of temporal data, for instance in YAGO, NELL and Wikidata. While YAGO and Wikidata maintain the valid time of facts, NELL records the time point at which a fact is retrieved from some Web corpora. Collectively, these knowledge graphs (KGs) store facts extracted from Wikipedia and other sources. Due to the imprecise nature of the extraction tools used to build and expand KGs such as NELL, the facts in the KG are weighted with a confidence value representing the correctness of a fact. Additionally, NELL can be considered a transaction-time KG because every fact is associated with an extraction date. On the other hand, YAGO and Wikidata use the valid-time model because they maintain facts together with their validity time (temporal scope). In this paper, we propose a bitemporal model (combining the transaction- and valid-time models) for maintaining and querying bitemporal probabilistic knowledge graphs. We study coalescing and the scalability of marginal and MAP inference. Moreover, we show that the complexity of reasoning tasks in atemporal probabilistic KGs carries over to the bitemporal setting. Finally, we report the evaluation results of the proposed model.
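    A bitemporal, weighted fact can be thought of as a triple annotated with a valid-time interval, a transaction time and a confidence value. The sketch below shows one assumed representation together with a simple valid-time coalescing step; the field names and the max-confidence combination rule are illustrative choices, not the model proposed in the paper.

```python
# Minimal sketch (assumed representation, not the paper's data model): a
# bitemporal, weighted fact record plus a simple valid-time coalescing step
# that merges overlapping intervals for the same (subject, predicate, object).

from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    predicate: str
    object: str
    valid_from: int        # valid time: when the fact holds in the real world
    valid_to: int
    transaction_time: int  # transaction time: when the fact entered the KG
    confidence: float      # extraction confidence in [0, 1]

def coalesce(facts):
    """Merge value-equivalent facts whose valid-time intervals overlap or meet,
    keeping the maximum confidence (one simple, assumed combination rule)."""
    by_triple = {}
    for f in sorted(facts, key=lambda f: (f.subject, f.predicate, f.object, f.valid_from)):
        key = (f.subject, f.predicate, f.object)
        merged = by_triple.setdefault(key, [])
        if merged and f.valid_from <= merged[-1].valid_to:
            merged[-1].valid_to = max(merged[-1].valid_to, f.valid_to)
            merged[-1].confidence = max(merged[-1].confidence, f.confidence)
        else:
            merged.append(f)
    return [f for group in by_triple.values() for f in group]

facts = [
    Fact("obama", "presidentOf", "usa", 2009, 2013, 2015, 0.9),
    Fact("obama", "presidentOf", "usa", 2012, 2017, 2016, 0.8),
]
print(coalesce(facts))  # one coalesced fact, valid from 2009 to 2017, confidence 0.9
```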

    LiteMat: a scalable, cost-efficient inference encoding scheme for large RDF graphs

    The number of linked data sources and the size of the linked open data graph keep growing every day. As a consequence, semantic RDF services are more and more confronted with various "big data" problems. Query processing in the presence of inferences is one of them. For instance, to complete the answer set of SPARQL queries, RDF database systems evaluate semantic RDFS relationships (subPropertyOf, subClassOf) through time-consuming query rewriting algorithms or space-consuming data materialization solutions. To reduce the memory footprint and ease the exchange of large datasets, these systems generally apply a dictionary approach, compressing triples by replacing resource identifiers (IRIs), blank nodes and literals with integer values. In this article, we present a structured resource identification scheme using a clever encoding of concept and property hierarchies for efficiently evaluating the main common RDFS entailment rules while minimizing triple materialization and query rewriting. We show how this encoding can be computed by a scalable parallel algorithm and implemented directly over the Apache Spark framework. The efficiency of our encoding scheme is demonstrated by an evaluation conducted over both synthetic and real-world datasets. (arXiv comment: 8 pages, 1 figure.)
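    One way such a hierarchy-aware identifier scheme can work is to give each concept a bit-string identifier prefixed by the identifier of its superconcept, so that subClassOf entailment reduces to a cheap prefix test instead of query rewriting or triple materialization. The sketch below is an assumed simplification of this idea (the concept codes and names are invented), not the exact LiteMat encoding.

```python
# Minimal sketch (assumed simplification of the encoding idea): each concept
# gets a binary identifier prefixed by its superconcept's identifier, so that
# rdfs:subClassOf entailment becomes a prefix test.

# Hypothetical concept hierarchy encoded by bit-string prefixes.
encoding = {
    "Agent":        "1",
    "Person":       "101",    # prefixed by "1"   -> subclass of Agent
    "Organization": "110",    # prefixed by "1"   -> subclass of Agent
    "Student":      "10101",  # prefixed by "101" -> subclass of Person
}

def is_subclass_of(sub, sup):
    """sub is (transitively) a subclass of sup iff sup's code prefixes sub's code."""
    return encoding[sub].startswith(encoding[sup])

# Instance retrieval under RDFS entailment: a typed resource matches any superclass.
triples = [("alice", "rdf:type", "Student")]

def instances_of(concept):
    return {s for (s, p, o) in triples
            if p == "rdf:type" and o in encoding and is_subclass_of(o, concept)}

print(is_subclass_of("Student", "Agent"))  # True
print(instances_of("Person"))              # {'alice'}
```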

    Combining open and closed world reasoning for the semantic web

    Dissertation submitted for the degree of Doctor in Informatics. One important problem in the ongoing standardization of knowledge representation languages for the Semantic Web is combining open world ontology languages, such as the OWL-based ones, and closed world rule-based languages. The main difficulty of such a combination is that the two formalisms are quite orthogonal w.r.t. expressiveness and how decidability is achieved. Combining non-monotonic rules and ontologies is thus a challenging task that requires careful balancing between the expressiveness of the knowledge representation language and the computational complexity of reasoning. In this thesis, we argue in favor of a combination of ontologies and non-monotonic rules that tightly integrates the two formalisms involved, that has a computational complexity that is as low as possible, and that allows us to query for information instead of computing the whole model. As our starting point we choose the mature approach of hybrid MKNF knowledge bases, which is based on an adaptation of the Stable Model Semantics to knowledge bases consisting of ontology axioms and rules. We extend the two-valued framework of MKNF logics to a three-valued logic, and we propose a well-founded semantics for non-disjunctive hybrid MKNF knowledge bases. This new semantics promises better efficiency of reasoning, and it is faithful w.r.t. the original two-valued MKNF semantics and compatible with both the OWL-based semantics and the traditional Well-Founded Semantics for logic programs. We provide an algorithm based on operators to compute the unique model, and we extend SLG resolution with tabling to a general framework that allows us to query a combination of non-monotonic rules and any given ontology language. Finally, we investigate concrete instances of that procedure w.r.t. three tractable ontology languages, namely the three description logics underlying the OWL 2 profiles. (Funded by Fundação para a Ciência e Tecnologia, grant contract SFRH/BD/28745/200)
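    The well-founded flavor of the proposed semantics can be illustrated, for the rules-only and propositional case, by the classical alternating-fixpoint construction: atoms come out true, undefined or false. The sketch below is such a simplified illustration with an invented toy program; the ontology component and the MKNF-specific operators of the thesis are deliberately omitted.

```python
# Minimal sketch (propositional, rules-only, assumed simplification): the
# alternating-fixpoint construction of the well-founded model of a normal
# logic program.  The thesis combines a three-valued semantics of this kind
# with ontology axioms; the ontology side is not modeled here.

# Rules: (head, positive body, negative body), all propositional atoms.
rules = [
    ("p", ["q"], []),   # p <- q
    ("q", [], ["r"]),   # q <- not r
    ("r", [], ["q"]),   # r <- not q
    ("s", [], []),      # s.
]
atoms = {a for head, pos, neg in rules for a in [head, *pos, *neg]}

def gamma(interpretation):
    """Least model of the reduct of the program w.r.t. `interpretation`:
    drop rules whose negative body intersects it, then drop all negations."""
    reduct = [(h, pos) for h, pos, neg in rules
              if not (set(neg) & interpretation)]
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if h not in model and set(pos) <= model:
                model.add(h)
                changed = True
    return model

# Alternating fixpoint: gamma(gamma(.)) is monotone; its least fixpoint is the
# set of true atoms, and gamma of that set bounds the atoms that are not false.
true = set()
while True:
    nxt = gamma(gamma(true))
    if nxt == true:
        break
    true = nxt
not_false = gamma(true)
print("true:", sorted(true))                    # ['s']
print("undefined:", sorted(not_false - true))   # ['p', 'q', 'r']
print("false:", sorted(atoms - not_false))      # []
```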