17 research outputs found

    Query rewriting under linear EL knowledge bases

    With the adoption of the recent SPARQL 1.1 standard, RDF databases can directly answer queries more expressive than simple conjunctive queries. In this paper we exploit these capabilities to answer conjunctive queries (CQs) under ontologies expressed in linear EL, a restricted form of the description logic EL. In particular, we present a query answering algorithm that rewrites a given CQ into a conjunctive regular path query (CRPQ) which, when evaluated over the given instance, returns the correct answers. Our technique is based on representing infinite unions of CQs by non-deterministic finite-state automata. The approach achieves optimal data complexity and produces rewritings that are straightforwardly implementable in SPARQL 1.1.
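
    As a rough illustration of the kind of rewriting involved (not the paper's algorithm, and with a hypothetical vocabulary): under the linear EL axiom ∃locatedIn.Region ⊑ Region, the CQ Q(x) ← Region(x) has infinitely many CQ rewritings, one per locatedIn-chain length, but they collapse into a single SPARQL 1.1 property-path query that can be evaluated, for example, with rdflib.

    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/")          # hypothetical vocabulary
    g = Graph()
    g.add((EX.room12, EX.locatedIn, EX.floor1))
    g.add((EX.floor1, EX.locatedIn, EX.campusA))
    g.add((EX.campusA, RDF.type, EX.Region))

    # locatedIn* matches chains of any length, including length zero,
    # so this single query covers the whole infinite union of CQs.
    crpq = """
        PREFIX ex: <http://example.org/>
        SELECT ?x WHERE { ?x ex:locatedIn* ?y . ?y a ex:Region . }
    """
    for row in g.query(crpq):
        print(row.x)        # room12, floor1 and campusA are all certain answers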

    Requirements modelling and formal analysis using graph operations

    The increasing complexity of enterprise systems demands a more advanced analysis of the expected services than is currently possible. Consequently, the specification stage, which can be supported by formal verification, becomes very important in the system life-cycle. This paper presents a formal modelling approach that can be used to better represent the reality of the system and to verify its expected or existing properties, taking environmental characteristics into account. To this end, we first propose a formalization process based on property specification, and then use Conceptual Graph operations to develop reasoning mechanisms for verifying requirements statements. The graphical visualization of this reasoning allows system specifications to be captured correctly by making it easier to determine whether the desired properties hold. The approach is applied to the field of enterprise modelling.
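
    A minimal sketch of the core reasoning step behind Conceptual Graph verification, under simplifying assumptions (no concept-type hierarchy, all requirement nodes treated as generic, hypothetical example graphs): a requirement holds if its graph projects, i.e. maps by a label-preserving homomorphism, into the system model graph.

    from itertools import product

    def projects(requirement, model):
        """requirement/model: sets of (subject, relation, object) triples."""
        req_nodes = list({n for s, _, o in requirement for n in (s, o)})
        mod_nodes = list({n for s, _, o in model for n in (s, o)})
        for assignment in product(mod_nodes, repeat=len(req_nodes)):
            mapping = dict(zip(req_nodes, assignment))
            if all((mapping[s], r, mapping[o]) in model for s, r, o in requirement):
                return True
        return False

    # Hypothetical requirement: "some component validates incoming orders".
    model = {("orderService", "validates", "incomingOrder"),
             ("orderService", "logsTo", "auditTrail")}
    requirement = {("someComponent", "validates", "incomingOrder")}
    print(projects(requirement, model))   # True: the requirement projects onto the model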

    STYPES: nonrecursive datalog rewriter for linear TGDs and conjunctive queries

    We present STYPES, a system that rewrites ontology-mediated queries consisting of linear tuple-generating dependencies (TGDs) and conjunctive queries into equivalent nonrecursive datalog (NDL) queries. The main feature of STYPES is that it produces polynomial-size rewritings whenever the treewidth of the input conjunctive queries, the size of the chases for the ontology atoms, and their arity are bounded; moreover, the rewritings can be constructed and executed in LOGCFL, indicating high parallelisability in theory. We show experimentally that Apache Flink on a cluster of machines with 20 virtual CPUs is indeed able to parallelise the execution of a series of NDL rewritings constructed by STYPES, with execution time decreasing in proportion to the number of available CPUs.
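
    A hedged illustration of what such an NDL rewriting can look like and how it can be evaluated bottom-up (this is not how STYPES constructs its rewritings; the predicates are hypothetical): for the linear TGD project(x) → ∃y hasLead(x, y) ∧ employee(y) and the CQ q(x) ← hasLead(x, y), employee(y), one equivalent NDL program is q(x) ← hasLead(x, y), employee(y) together with q(x) ← project(x).

    def evaluate(rules, facts):
        """Naive bottom-up evaluation of nonrecursive datalog rules.
        Each rule is (head, body); atoms are (predicate, argument-tuple);
        rules are listed so that a rule only uses already-derived predicates."""
        derived = set(facts)
        for head, body in rules:
            substitutions = [{}]
            for pred, args in body:
                extended = []
                for sub in substitutions:
                    for fact in derived:
                        if fact[0] != pred or len(fact) - 1 != len(args):
                            continue
                        candidate = dict(sub)
                        if all(candidate.setdefault(v, c) == c
                               for v, c in zip(args, fact[1:])):
                            extended.append(candidate)
                substitutions = extended
            for sub in substitutions:
                derived.add((head[0], *(sub[v] for v in head[1])))
        return derived

    facts = {("project", "apollo"), ("hasLead", "gemini", "ann"), ("employee", "ann")}
    rules = [(("q", ("x",)), [("hasLead", ("x", "y")), ("employee", ("y",))]),
             (("q", ("x",)), [("project", ("x",))])]
    print({f for f in evaluate(rules, facts) if f[0] == "q"})
    # {('q', 'gemini'), ('q', 'apollo')}  -- both certain answers are recovered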

    Multi-Criteria Decision Making with Existential Rules using Repair Techniques

    The central problem in multi-criteria decision making is to reach an acceptable decision by aggregating preferences over multiple criteria. In this paper, we show how the reasoning capabilities of existential rules can be exploited to model a multi-criteria decision making problem as an inconsistent knowledge base. The repairs of this knowledge base represent the maximally consistent points of view, and inference strategies over them can be used for decision making.
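
    A simplified, hypothetical sketch of this repair-based aggregation (not the paper's existential-rule encoding): criterion preferences are facts, incompatible preferences form conflicts, repairs are the maximal conflict-free subsets, and different inference strategies then aggregate over the repairs.

    from itertools import combinations

    facts = {"cost_prefers_A", "quality_prefers_B", "delay_prefers_A"}
    conflicts = [{"cost_prefers_A", "quality_prefers_B"},
                 {"quality_prefers_B", "delay_prefers_A"}]

    def repairs(facts, conflicts):
        """All maximal subsets of facts containing no conflict (brute force)."""
        conflict_free = [set(c) for r in range(len(facts), -1, -1)
                         for c in combinations(facts, r)
                         if not any(k <= set(c) for k in conflicts)]
        return [c for c in conflict_free if not any(c < d for d in conflict_free)]

    reps = repairs(facts, conflicts)
    print(reps)   # two repairs: {cost_prefers_A, delay_prefers_A} and {quality_prefers_B}
    # Skeptical strategy: accepted only if supported in every repair.
    print(all("cost_prefers_A" in r for r in reps))    # False
    # Brave strategy: accepted if supported in at least one repair.
    print(any("quality_prefers_B" in r for r in reps))  # True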

    A formalism unifying Defeasible Logics and Repair Semantics for existential rules

    Two prominent ways of handling inconsistency in the state of the art are repair semantics and Defeasible Logics. In this paper we place ourselves in the setting of inconsistent knowledge bases expressed using existential rules and investigate how these approaches relate to each other. We run an experiment that checks how human intuitions align with those of repair-based and defeasible methods, and we propose a new semantics combining both worlds.

    SHACL constraints with inference rules

    The Shapes Constraint Language (SHACL) has recently been introduced as a W3C recommendation for defining constraints that can be validated against RDF graphs. The interaction of SHACL with other Semantic Web technologies, such as ontologies or reasoners, is a matter of ongoing research. In this paper we study the interaction of a subset of SHACL with inference rules expressed in datalog. On the one hand, SHACL constraints can be used to define a "schema" for graph datasets. On the other hand, inference rules can lead to the discovery of new facts that do not match the original schema. Given a set of SHACL constraints and a set of datalog rules, we present a method to detect which constraints could be violated by applying the inference rules to some graph instance of the schema, and to update the original schema, i.e., the set of SHACL constraints, so that it captures the new facts that can be inferred. We provide theoretical and experimental results for the various components of our approach.
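
    A concrete instance of the interaction studied here, sketched with rdflib and pySHACL (this is not the paper's static analysis; the rule, prefixes and shape are hypothetical): a datalog-style rule worksFor(x, y) → Employee(x) derives a fact that violates a shape requiring every Employee to have a name.

    from rdflib import Graph, Namespace, RDF
    from pyshacl import validate

    EX = Namespace("http://example.org/")

    data = Graph()
    data.add((EX.bob, EX.worksFor, EX.acme))      # bob has no ex:name

    shapes = Graph()
    shapes.parse(data="""
        @prefix sh: <http://www.w3.org/ns/shacl#> .
        @prefix ex: <http://example.org/> .
        ex:EmployeeShape a sh:NodeShape ;
            sh:targetClass ex:Employee ;
            sh:property [ sh:path ex:name ; sh:minCount 1 ] .
    """, format="turtle")

    # Materialise the consequences of the inference rule on the data graph.
    for subject, _, _ in data.triples((None, EX.worksFor, None)):
        data.add((subject, RDF.type, EX.Employee))

    conforms, _, report = validate(data, shacl_graph=shapes)
    print(conforms)   # False: the inferred ex:Employee fact now violates the shape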

    A Datalog+/- Domain-Specific Durum Wheat Knowledge Base

    We consider an application setting in which a domain-specific knowledge base about durum wheat has been constructed by knowledge engineers who are not experts in the domain. Such a knowledge base is prone to inconsistency and incompleteness. The goal of this work is to show how the state-of-the-art knowledge representation formalism Datalog± can be used to cope with these problems by (1) providing inconsistency-tolerant techniques to handle inconsistency, and (2) providing an expressive logical language for representing incomplete knowledge.
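
    A minimal sketch of how Datalog± existential rules capture such incomplete knowledge (the predicates are hypothetical, not taken from the actual knowledge base): applying a rule like durumWheat(x) → ∃y harvestedIn(x, y) lets the chase record that a harvest region exists without naming it, by introducing a labelled null.

    from itertools import count

    fresh_nulls = count()
    facts = {("durumWheat", "sample42")}

    def chase_step(facts):
        """Apply the existential rule once, inventing a fresh labelled null
        for each durumWheat fact that has no harvestedIn fact yet."""
        new_facts = set()
        for fact in facts:
            if fact[0] != "durumWheat":
                continue
            wheat = fact[1]
            if not any(f[0] == "harvestedIn" and f[1] == wheat for f in facts):
                new_facts.add(("harvestedIn", wheat, f"_:n{next(fresh_nulls)}"))
        return facts | new_facts

    print(chase_step(facts))
    # {('durumWheat', 'sample42'), ('harvestedIn', 'sample42', '_:n0')}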

    Is Your Database System a Semantic Web Reasoner?


    An Incremental Algorithm for Computing All Repairs in Inconsistent Knowledge Bases

    Repair techniques are used for reasoning in the presence of inconsistency. Such techniques typically rely on optimisations that avoid computing all repairs, yet certain applications need all repairs to be generated. In this paper, we show that the problem of computing all repairs is not trivial in practice. To provide a scalable solution, we propose an incremental approach for computing all repairs when conflicts have cardinality at most three. We empirically study its performance on generated knowledge bases (where the knowledge base generator can be seen as a secondary contribution in itself).
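
    A naive incremental scheme in the same spirit (not the paper's algorithm; the facts and conflicts are hypothetical): maintain the current set of repairs and, whenever a new conflict arrives, split every repair that still contains it by removing one member at a time, then prune sets that are no longer maximal.

    def add_conflict(repairs, conflict):
        updated = []
        for repair in repairs:
            if conflict <= repair:
                # the repair is no longer conflict-free: drop one culprit at a time
                updated.extend(repair - {atom} for atom in conflict)
            else:
                updated.append(repair)
        # keep only the maximal conflict-free sets
        return [r for r in updated if not any(r < s for s in updated)]

    facts = {"a", "b", "c", "d"}
    current_repairs = [set(facts)]                      # no conflicts yet: one repair
    for conflict in [{"a", "b"}, {"b", "c", "d"}]:      # conflicts of cardinality <= 3
        current_repairs = add_conflict(current_repairs, conflict)
    print(current_repairs)   # the maximal conflict-free subsets: {a,c,d}, {b,c}, {b,d}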