    Datalog Rewritability of Disjunctive Datalog Programs and its Applications to Ontology Reasoning

    We study the problem of rewriting a disjunctive datalog program into plain datalog. We show that a disjunctive program is rewritable if and only if it is equivalent to a linear disjunctive program, thus providing a novel characterisation of datalog rewritability. Motivated by this result, we propose weakly linear disjunctive datalog, a novel rule-based KR language that extends both datalog and linear disjunctive datalog and for which reasoning is tractable in data complexity. We then explore applications of weakly linear programs to ontology reasoning and propose a tractable extension of OWL 2 RL with disjunctive axioms. Our empirical results suggest that many non-Horn ontologies can be reduced to weakly linear programs and that query answering over such ontologies using a datalog engine is feasible in practice. (Comment: 14 pages. To appear at AAAI-14.)
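
    To make the plain-datalog target concrete, here is a minimal naive bottom-up datalog evaluator, a sketch of the kind of engine the abstract refers to rather than the paper's rewriting procedure; the rule encoding (variables as '?'-prefixed strings) and the transitive-closure example are assumptions of this illustration.

```python
def unify(atom, fact, subst):
    """Match one atom against one ground fact, extending `subst`; variables
    are strings starting with '?'. Returns the extended substitution or None."""
    if len(atom) != len(fact):
        return None
    s = dict(subst)
    for t, v in zip(atom, fact):
        if t.startswith("?"):
            if s.setdefault(t, v) != v:
                return None
        elif t != v:
            return None
    return s

def matches(body, facts, subst):
    """Yield every substitution satisfying all atoms of `body` in `facts`."""
    if not body:
        yield subst
        return
    for fact in facts:
        s = unify(body[0], fact, subst)
        if s is not None:
            yield from matches(body[1:], facts, s)

def evaluate(facts, rules):
    """Naive bottom-up evaluation: apply all rules until a fixpoint."""
    facts = set(facts)
    while True:
        derived = set()
        for head, body in rules:
            for subst in matches(body, facts, {}):
                derived.add(tuple(subst.get(t, t) for t in head))
        if derived <= facts:
            return facts
        facts |= derived

# transitive closure: path(X,Y) :- edge(X,Y).  path(X,Z) :- edge(X,Y), path(Y,Z).
rules = [(("path", "?x", "?y"), [("edge", "?x", "?y")]),
         (("path", "?x", "?z"), [("edge", "?x", "?y"), ("path", "?y", "?z")])]
facts = {("edge", "a", "b"), ("edge", "b", "c")}
print(("path", "a", "c") in evaluate(facts, rules))  # True
```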

    A tetrachotomy of ontology-mediated queries with a covering axiom

    Our concern is the problem of efficiently determining the data complexity of answering queries mediated by description logic ontologies and constructing their optimal rewritings to standard database queries. Originating in ontology-based data access and datalog optimisation, this problem is known to be computationally very complex in general, with no explicit syntactic characterisations available. In this article, aiming to understand the fundamental roots of this difficulty, we strip the problem to the bare bones and focus on Boolean conjunctive queries mediated by a simple covering axiom stating that one class is covered by the union of two other classes. We show that, on the one hand, these rudimentary ontology-mediated queries, called disjunctive sirups (or d-sirups), capture many features and difficulties of the general case. For example, answering d-sirups is Π^p_2-complete for combined complexity and can be in AC0 or L-, NL-, P-, or coNP-complete for data complexity (with the problem of recognising FO-rewritability of d-sirups being 2ExpTime-hard); some d-sirups only have exponential-size resolution proofs, some only double-exponential-size positive existential FO-rewritings and single-exponential-size nonrecursive datalog rewritings. On the other hand, we prove a few partial sufficient and necessary conditions of FO- and (symmetric/linear-) datalog rewritability of d-sirups. Our main technical result is a complete and transparent syntactic AC0/NL/P/coNP tetrachotomy of d-sirups with disjoint covering classes and a path-shaped Boolean conjunctive query. To obtain this tetrachotomy, we develop new techniques for establishing P- and coNP-hardness of answering non-Horn ontology-mediated queries, as well as showing that they can be answered in NL.
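
    As a concrete reading of the setup, the sketch below brute-forces the certain answer to a d-sirup: the covering axiom A(x) -> F(x) v T(x) forces every A-element into F or T, and the Boolean query is certain iff it holds under all 2^n such choices, mirroring the generic coNP upper bound. The specific data, query shape (F(x) & R(x,y) & T(y)), and encoding are illustrative assumptions, not taken from the article.

```python
from itertools import product

def certain(data, query):
    """Brute-force certain answer: does `query` hold in every model obtained
    by putting each A-element into F or into T (the covering axiom)?"""
    a_elems = sorted({e for (p, *args) in data if p == "A" for e in args})
    for choice in product("FT", repeat=len(a_elems)):
        model = set(data) | {(cls, e) for cls, e in zip(choice, a_elems)}
        if not query(model):
            return False          # countermodel found: answer is not certain
    return True

def q(model):
    """The path query  exists x,y : F(x) & R(x,y) & T(y)."""
    edges = {(x, y) for (p, x, y) in (t for t in model if t[0] == "R")}
    return any(("F", x) in model and ("T", y) in model for (x, y) in edges)

data = {("F", "a"), ("A", "b"), ("T", "c"), ("R", "a", "b"), ("R", "b", "c")}
print(certain(data, q))                 # True: whichever of F, T gets b, q fires
print(certain(data - {("T", "c")}, q))  # False: putting b in F is a countermodel
```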

    A data complexity and rewritability tetrachotomy of ontology-mediated queries with a covering axiom

    Aiming to understand the data complexity of answering conjunctive queries mediated by an axiom stating that a class is covered by the union of two other classes, we show that deciding their first-order rewritability is PSPACE-hard and obtain a number of sufficient conditions for membership in AC0, L, NL, and P. Our main result is a complete syntactic AC0/NL/P/coNP tetrachotomy of path queries under the assumption that the covering classes are disjoint.
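
    Written out, the setting this abstract describes is a single covering axiom over disjoint classes together with a path-shaped Boolean conjunctive query; the exact shape below, in particular the atoms τ_i, is an illustrative reconstruction rather than the article's verbatim definition.

```latex
% covering axiom with disjoint covering classes
\mathit{cov}\colon\quad \forall x\,\bigl(A(x) \to F(x) \lor T(x)\bigr),
\qquad \forall x\,\neg\bigl(F(x) \land T(x)\bigr)

% a path-shaped Boolean conjunctive query, where each \tau_i is a
% (possibly empty) conjunction of atoms among F(x_i), T(x_i)
q\colon\quad \exists x_0 \cdots \exists x_n\,
\bigl(\tau_0(x_0) \land R(x_0,x_1) \land \tau_1(x_1) \land \cdots
      \land R(x_{n-1},x_n) \land \tau_n(x_n)\bigr)
```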

    Decomposition by tree dimension in Horn clause verification

    This volume contains the papers selected from those presented at the 3rd International Workshop on Verification and Program Transformation (VPT 2015), held in London, UK, on April 11th, 2015. Previous editions of the workshop were held in Saint Petersburg (Russia) in 2013 and in Vienna (Austria) in 2014. The papers show that methods and tools developed in the field of program transformation, such as partial evaluation, fold/unfold transformations, and supercompilation, can be applied to the verification of software systems. They also show how program verification methods, such as model checking, abstract interpretation, SAT and SMT solving, and automated theorem proving, can be used to enhance program transformation techniques, thereby making them more powerful and useful in practice.

    The Fine-Grained Complexity of CFL Reachability

    Many problems in static program analysis can be modeled as the context-free language (CFL) reachability problem on directed labeled graphs. The CFL reachability problem can generally be solved in time O(n^3), where n is the number of vertices in the graph, with some specific cases that can be solved faster. In this work, we ask the following question: given a specific CFL, what is the exact exponent in the monomial of the running time? In other words, for which cases do we have linear, quadratic or cubic algorithms, and are there problems with intermediate runtimes? This question is inspired by recent efforts to classify classic problems in terms of their exact polynomial complexity, known as fine-grained complexity. Although recent efforts have shown some conditional lower bounds (mostly for the class of combinatorial algorithms), a general picture of the fine-grained complexity landscape for CFL reachability is missing. Our main contribution is lower bound results that pinpoint the exact running time of several classes of CFLs or specific CFLs under widely believed lower bound conjectures (Boolean Matrix Multiplication and k-Clique). We particularly focus on the family of Dyck-k languages (which are strings with well-matched parentheses), a fundamental class of CFL reachability problems. We present new lower bounds for the case of sparse input graphs where the number of edges m is the input parameter, a common setting in the database literature. For this setting, we show a cubic lower bound for Andersen's Pointer Analysis, which significantly strengthens prior known results. (Comment: Appeared in POPL 2023. Please note the erratum on the first page.)
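
    For reference, the standard cubic upper bound is a worklist closure over a grammar in (near-)Chomsky normal form; the sketch below instantiates it for Dyck-1 reachability. The grammar factoring (D -> eps | ( D ) D, split into the binary rules D -> L T, T -> D U, U -> R D, with L and R matching "(" and ")") is one of several equivalent encodings and is an assumption of this sketch.

```python
from collections import defaultdict, deque

def cfl_reachability(nodes, edges, terminal_rules, binary_rules, nullable):
    """All-pairs CFL reachability: returns the set of facts (A, u, v) meaning
    some u-to-v path spells a string derivable from nonterminal A. For a
    fixed grammar this is the classic O(n^3) worklist algorithm."""
    facts, work = set(), deque()

    def add(A, u, v):
        if (A, u, v) not in facts:
            facts.add((A, u, v))
            work.append((A, u, v))

    for u, label, v in edges:                 # A -> 'a': each labelled edge
        for A in terminal_rules.get(label, ()):
            add(A, u, v)
    for A in nullable:                        # A -> eps: empty path anywhere
        for v in nodes:
            add(A, v, v)

    by_left = defaultdict(list)               # B -> [(A, C)] for each A -> B C
    by_right = defaultdict(list)              # C -> [(A, B)] for each A -> B C
    for A, B, C in binary_rules:
        by_left[B].append((A, C))
        by_right[C].append((A, B))

    succ = defaultdict(set)                   # (X, u) -> {v : (X, u, v) done}
    pred = defaultdict(set)                   # (X, v) -> {u : (X, u, v) done}
    while work:
        X, u, v = work.popleft()
        succ[(X, u)].add(v)
        pred[(X, v)].add(u)
        for A, C in by_left[X]:               # X plays B in A -> X C
            for w in list(succ[(C, v)]):
                add(A, u, w)
        for A, B in by_right[X]:              # X plays C in A -> B X
            for w in list(pred[(B, u)]):
                add(A, w, v)
    return facts

# Dyck-1 over a 4-edge path spelling "(())"
nodes = set(range(5))
edges = {(0, "(", 1), (1, "(", 2), (2, ")", 3), (3, ")", 4)}
facts = cfl_reachability(nodes, edges,
                         terminal_rules={"(": {"L"}, ")": {"R"}},
                         binary_rules=[("D", "L", "T"),
                                       ("T", "D", "U"),
                                       ("U", "R", "D")],
                         nullable={"D"})
print(("D", 0, 4) in facts)   # True: the 0-to-4 path spells "(())"
print(("D", 0, 3) in facts)   # False: "(()" is unbalanced
```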

    Solving non-linear Horn clauses using a linear Horn clause solver

    In this paper we show that checking satisfiability of a set of non-linear Horn clauses (also called a non-linear Horn clause program) can be achieved using a solver for linear Horn clauses. We achieve this by interleaving a program transformation with a satisfiability checker for linear Horn clauses (also called a solver for linear Horn clauses). The program transformation is based on the notion of tree dimension, which we apply to a set of non-linear clauses, yielding a set whose derivation trees have bounded dimension. Such a set of clauses can be linearised. The main algorithm then proceeds by applying the linearisation transformation and the solver for linear Horn clauses to a sequence of sets of clauses with successively increasing dimension bound. The approach is then further developed by using a solution of clauses of lower dimension to (partially) linearise clauses of higher dimension. We constructed a prototype implementation of this approach and performed experiments on a set of verification problems, with promising results. (Comment: In Proceedings HCVS 2016, arXiv:1607.0403)
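
    The notion of tree dimension driving the transformation is simple to state: a leaf has dimension 0, and an internal node takes the maximum dimension d of its children, bumped to d + 1 when at least two children attain that maximum (this is the Strahler number). A small sketch, with the (label, children) tree encoding an assumption of this illustration; dimension-0 trees are exactly the linear ("vine"-shaped) derivations, which is the link the abstract exploits.

```python
def tree_dimension(tree):
    """Tree dimension (Strahler number) of a derivation tree encoded as
    (label, [children]): leaves have dimension 0; an internal node takes the
    maximum child dimension d, bumped to d + 1 if two children attain it."""
    _, children = tree
    if not children:
        return 0
    dims = sorted((tree_dimension(c) for c in children), reverse=True)
    return dims[0] + 1 if len(dims) > 1 and dims[0] == dims[1] else dims[0]

leaf = ("fact", [])
vine = ("p", [("q", [("r", [leaf])])])                    # linear derivation
full = ("p", [("q", [leaf, leaf]), ("q", [leaf, leaf])])  # balanced derivation
print(tree_dimension(vine))   # 0
print(tree_dimension(full))   # 2
```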

    A framework for automated concurrency verification

    Reasoning systems based on Concurrent Separation Logic make verifying complex concurrent algorithms readily possible. Such algorithms contain subtle protocols of permission and resource transfer between threads; to cope with these intricacies, modern concurrent separation logics contain many moving parts and integrate many bespoke logical components. Verifying concurrent algorithms by hand consumes much time, effort, and expertise. As a result, computer-assisted verification is a fertile research topic, and fully automated verification is a popular research goal. Unfortunately, the complexity of modern concurrent separation logics makes them hard to automate, and the proliferation and fast turnover of such logics causes a downward pressure against building tools for new logics. As a result, many such logics lack tooling. This dissertation proposes Starling: a scheme for creating concurrent program logics that are automatable by construction. Starling adapts the existing Concurrent Views Framework for sound concurrent reasoning systems, overlaying a framework for reducing concurrent proof outlines to verification conditions in existing theories (such as those accepted by off-the-shelf sequential solvers). This dissertation describes Starling in a bottom-up, modular manner. First, it shows the derivation of a series of general concurrency proof rules from the Views framework. Next, it shows how one such rule leads to the Starling framework itself. From there, it outlines a series of increasingly elaborate frontends: ways of decomposing individual Hoare triples over atomic actions into verification conditions suitable for encoding into backend theories. Each frontend leads to a concurrent program logic. Finally, the dissertation presents a tool for verifying C-style concurrent proof outlines, based on one of the above frontends. It gives examples of such outlines, covering a variety of algorithms, backend solvers, and proof techniques
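
    The core reduction the dissertation builds on can be previewed in miniature: in a Views-style system, an axiom {p} c {q} is sound when, for every frame view r with which p and q compose, running c from the states of p * r lands inside the states of q * r, and these per-axiom checks are the verification conditions handed to a backend solver. The finite toy model below (views as partial variable assignments, composition as disjoint union) is an illustrative assumption of this sketch, not Starling's actual encoding.

```python
# toy state space: pairs (x, y) with small integer values
STATES = [(x, y) for x in range(4) for y in range(4)]

def states_of(view):
    """Reify a view (a partial map from variable to predicate) to states."""
    return {(x, y) for (x, y) in STATES
            if all(pred({"x": x, "y": y}[v]) for v, pred in view.items())}

def compose(p, r):
    """View composition as a partial commutative monoid: disjoint union,
    undefined (None) when the two views claim the same variable."""
    return None if set(p) & set(r) else {**p, **r}

def axiom_sound(p, act, q, views):
    """Per-axiom verification condition: for every frame r with which both
    p and q compose, act must map states(p * r) into states(q * r)."""
    for r in views:
        pre, post = compose(p, r), compose(q, r)
        if pre is None or post is None:
            continue
        if not all(act(s) in states_of(post) for s in states_of(pre)):
            return False
    return True

inc_x = lambda s: ((s[0] + 1) % 4, s[1])   # atomic action owning x only
even = lambda n: n % 2 == 0
odd = lambda n: n % 2 == 1

views = [{}, {"x": even}, {"x": odd}, {"y": even}, {"y": odd}]
print(axiom_sound({"x": even}, inc_x, {"x": odd}, views))    # True
print(axiom_sound({"x": even}, inc_x, {"x": even}, views))   # False
```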

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades, with numerous ideas in common but little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
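
    The justification-based side of the correspondence can be made concrete with a brute-force sketch: compute all minimal subsets of the knowledge base entailing the unwanted consequence (its justifications), then repair by deleting, or more gently weakening, one element of each. The propositional Horn encoding and the toy base below are assumptions of this illustration, not the paper's framework.

```python
from itertools import combinations

def closure(kb):
    """Forward-chain a base of facts (strings) and Horn rules (body, head)."""
    facts = {f for f in kb if isinstance(f, str)}
    rules = [f for f in kb if not isinstance(f, str)]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def justifications(kb, goal):
    """All minimal subsets of kb entailing goal (exponential brute force)."""
    kb, found = list(kb), []
    for k in range(1, len(kb) + 1):
        for sub in map(set, combinations(kb, k)):
            if goal in closure(sub) and not any(j <= sub for j in found):
                found.append(sub)
    return found

kb = {"a", (("a",), "b"), (("b",), "c")}
print("c" in closure(kb))           # True: a, a -> b, b -> c entail c
for j in justifications(kb, "c"):
    print(sorted(map(str, j)))      # the single justification: all of kb
# a classical repair deletes one member of each justification; a gentle
# repair / pseudo-contraction instead replaces it with a weaker sentence,
# e.g. weakening a -> b to a & aux -> b, so that c is no longer entailed
```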