70 research outputs found

    Do Repeat Yourself: Understanding Sufficient Conditions for Restricted Chase Non-Termination

    The disjunctive restricted chase is a sound and complete procedure for solving Boolean conjunctive query entailment over knowledge bases of disjunctive existential rules. Alas, this procedure does not always terminate, and checking whether it does is undecidable. However, we can use acyclicity notions (sufficient conditions that imply termination) to effectively apply the chase in many real-world cases. To know if these conditions are as general as possible, we can use cyclicity notions (sufficient conditions that imply non-termination). In this paper, we discuss some issues with previously existing cyclicity notions, propose some novel notions for non-termination by dismantling the original idea, and empirically verify the generality of the new criteria.
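    To make the termination question concrete, the following is a minimal sketch (our own toy illustration, not the paper's rule language or procedure) of the restricted chase for rules of the single hypothetical shape R(x, y) -> ∃z. R(y, z); it terminates on one instance and keeps inventing fresh nulls on another:

```python
# Toy restricted chase over one binary predicate R, assuming (hypothetically)
# only rules of the shape R(x, y) -> exists z. R(y, z).
# Not the paper's rule language or algorithm.
from itertools import count

def restricted_chase(facts, max_steps=20):
    """facts: set of pairs (a, b) read as R(a, b). Returns the chased instance,
    or None if the step budget runs out (a hint of non-termination)."""
    facts = set(facts)
    fresh = (f"_n{i}" for i in count())
    for _ in range(max_steps):
        # An *active* trigger is a fact R(a, b) whose head exists z. R(b, z)
        # is not yet satisfied; the restricted chase only fires those.
        trigger = next(((a, b) for (a, b) in facts
                        if not any(c == b for (c, _) in facts)), None)
        if trigger is None:
            return facts                    # every trigger satisfied: terminate
        _, b = trigger
        facts.add((b, next(fresh)))         # apply the rule with a fresh null
    return None

print(restricted_chase({("a", "a")}))   # terminates immediately: {('a', 'a')}
print(restricted_chase({("a", "b")}))   # keeps inventing nulls -> None
```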

    Finite Herbrand Models for Restricted First-Order Clauses

    We call a Herbrand model of a set of first-order clauses finite if each of the predicates in the clauses is interpreted by a finite set of ground terms. We consider first-order clauses with the signature restricted to unary predicate and function symbols and one variable. Deciding the existence of a finite Herbrand model for a set of such clauses is known to be ExpTime-hard even when the clauses are restricted to an anti-Horn form. Here we present an ExpTime algorithm to decide whether a finite Herbrand model exists in the more general case of arbitrary clauses. Moreover, we describe a way to generate finite Herbrand models, if they exist. Since there can be infinitely many minimal finite Herbrand models, we propose a new notion of acyclic Herbrand models. If there is a finite Herbrand model for a set of restricted clauses, then there are finitely many (at most triple-exponentially many) acyclic Herbrand models. We show how to generate all of them.
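    As a small illustration (our own example, not taken from the paper), two clause sets over the restricted signature behave differently: the first admits a finite Herbrand model, while every Herbrand model of the second interprets P by an infinite set of ground terms.

```latex
% Illustrative clause sets over unary symbols and one variable x (our own example).
\begin{align*}
C_1 &= \{\, P(a),\ \neg P(x) \lor Q(x) \,\}
  && \text{finite Herbrand model: } P = Q = \{a\}\\
C_2 &= \{\, P(a),\ \neg P(x) \lor P(f(x)) \,\}
  && \text{every Herbrand model has } P \supseteq \{a,\, f(a),\, f(f(a)),\, \dots\}
\end{align*}
```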

    Closed-World Semantics for Query Answering in Temporal Description Logics

    Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only a few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumptions, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. Many real-world applications, like processing electronic health records (EHRs), also contain a temporal dimension and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. Our contribution consists of three main parts: Firstly, we introduce a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH⊥, which is based on the minimal universal model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity. Secondly, we introduce a new temporal variant of ELH⊥ that features a convexity operator. We extend this minimal-world semantics for answering metric temporal conjunctive queries with negation over the logic and obtain similar rewritability and complexity results. Thirdly, apart from the theoretical results, we evaluate minimal-world semantics in practice by selecting patients, based on their EHRs, that match given criteria.
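    For intuition, a query of the kind targeted here (a hypothetical EHR-flavoured example of ours, with made-up predicate names, not the thesis's syntax) combines positive atoms with a negated atom that is read over the minimal universal model rather than over all models:

```latex
% Hypothetical conjunctive query with negation over EHR data;
% diagnosedWith, Diabetes, InsulinPatient are our own illustrative names.
q(x) \;=\; \exists y.\ \mathit{diagnosedWith}(x, y) \wedge \mathit{Diabetes}(y)
           \wedge \neg\, \mathit{InsulinPatient}(x)
```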

    An SMT-based verification framework for software systems handling arrays

    Recent advances in the areas of automated reasoning and first-order theorem proving paved the way for the development of effective tools for the rigorous formal analysis of computer systems. Nowadays many formal verification frameworks are built over highly engineered tools (SMT solvers) implementing decision procedures for quantifier-free fragments of theories of interest for (dis)proving properties of software or hardware products. The goal of this thesis is to go beyond the quantifier-free case and enable sound and effective solutions for the analysis of software systems requiring the usage of quantifiers. This is the case, for example, for software systems handling array variables, since meaningful properties about arrays (e.g., "the array is sorted") can be expressed only by exploiting quantification. The first contribution of this thesis is the definition of a new Lazy Abstraction with Interpolants framework in which arrays can be handled in a natural manner. We identify a fragment of the theory of arrays admitting quantifier-free interpolation and provide an effective quantifier-free interpolation algorithm. The combination of this result with an important preprocessing technique allows the generation of the required quantified formulae. Second, we prove that accelerations, i.e., transitive closures, of an interesting class of relations over arrays are definable in the theory of arrays via Exists-Forall first-order formulae. We further show that the theoretical importance of this result has a practical relevance: once the (problematic) nested quantifiers are suitably handled, acceleration offers a precise (not over-approximated) alternative to abstraction solutions. Third, we present new decision procedures for quantified fragments of the theories of arrays. Our decision procedures are fully declarative, parametric in the theories describing the structure of the indexes and the elements of the arrays, and orthogonal with respect to known results. Fourth, by leveraging our new results on acceleration and decision procedures, we show that the problem of checking the safety of an important class of programs with arrays is fully decidable. Along with these theoretical results, the thesis presents practical engineering strategies for the effective implementation of a framework combining the aforementioned contributions: the declarative nature of our contributions allows for the definition of an integrated framework able to effectively check the safety of programs handling array variables while overcoming the individual limitations of the presented techniques.
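    As a small, self-contained illustration of why such properties need quantification (using the off-the-shelf Z3 SMT solver's Python bindings, not the framework developed in the thesis), sortedness of an array can be stated as a universally quantified formula and the solver can refute a local violation of it:

```python
# Sortedness as a quantified array property, checked with Z3 (pip install z3-solver).
# This only illustrates the need for quantifiers; it is not the thesis's framework.
from z3 import Array, IntSort, Ints, ForAll, Implies, And, Select, Solver

A = Array('A', IntSort(), IntSort())
i, j, n = Ints('i j n')

# "A is sorted on indices 0 .. n-1" -- not expressible without quantification.
is_sorted = ForAll([i, j],
                   Implies(And(0 <= i, i <= j, j < n),
                           Select(A, i) <= Select(A, j)))

s = Solver()
s.add(is_sorted, n > 3, Select(A, 2) < Select(A, 1))  # assert a local inversion
print(s.check())  # expected: unsat -- the inversion contradicts sortedness
```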

    Dealing with Inconsistencies and Updates in Description Logic Knowledge Bases

    The main purpose of an "Ontology-based Information System" (OIS) is to provide an explicit description of the domain of interest, called the ontology, and let all the functions of the system be based on such a representation, thus freeing the users from knowledge about the physical repositories where the real data reside. The functionalities that an OIS should provide to the user include both query answering, whose goal is to extract information from the system, and update, whose goal is to modify the information content of the system in order to reflect changes in the domain of interest. The ontology is a formal, high-quality intensional representation of the domain, designed in such a way as to avoid inconsistencies in the modeling of concepts and relationships. On the contrary, the extensional level of the system, constituted by a set of autonomous, heterogeneous data sources, is built independently from the conceptualization represented by the ontology, and therefore may contain information that is incoherent with the ontology itself. This dissertation presents a detailed study of the problem of dealing with inconsistencies in OISs, both in query answering and in performing updates. We concentrate on the case where the knowledge base in the OIS is expressed in Description Logics, especially the logics of the DL-Lite family. As for query answering, we propose both semantical frameworks that are inconsistency-tolerant and techniques for answering unions of conjunctive queries posed to OISs under such inconsistency-tolerant semantics. As for updates, we present an approach to compute the result of updating a possibly inconsistent OIS with both insertion and deletion of extensional knowledge.
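    To give a flavour of the setting (a standard textbook-style example of ours, not necessarily the exact semantics adopted in the dissertation), consider a knowledge base whose extensional data contradict the ontology; a repair-based, inconsistency-tolerant semantics still returns the answers supported by every way of restoring consistency:

```latex
% Illustrative inconsistent DL-Lite-style KB (our own example).
\begin{align*}
\mathcal{T} &= \{\, \mathit{Male} \sqsubseteq \neg\mathit{Female},\
                    \mathit{Male} \sqsubseteq \mathit{Person},\
                    \mathit{Female} \sqsubseteq \mathit{Person} \,\}\\
\mathcal{A} &= \{\, \mathit{Male}(\mathit{bob}),\ \mathit{Female}(\mathit{bob}) \,\}
\end{align*}
% The ABox repairs are {Male(bob)} and {Female(bob)}; a repair-based semantics
% answers q(x) = Person(x) with bob, since Person(bob) holds in every repair,
% whereas classical entailment is trivialised by the inconsistency.
```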

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
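    A minimal example (ours, not taken from the paper) of the shared idea: to get rid of the consequence A ⊑ B from the single axiom A ⊑ B ⊓ C, outright deletion throws away more than necessary, whereas a gentle repair or pseudo-contraction keeps the harmless part of the axiom.

```latex
% Weakening instead of deleting (our own toy example).
\begin{align*}
\mathcal{O} = \{\, A \sqsubseteq B \sqcap C \,\}, &\qquad
  \text{unwanted consequence: } A \sqsubseteq B\\
\text{deletion:}  &\qquad \mathcal{O}' = \emptyset
  \quad (\text{also loses } A \sqsubseteq C)\\
\text{weakening:} &\qquad \mathcal{O}'' = \{\, A \sqsubseteq C \,\}
  \quad (\text{no longer entails } A \sqsubseteq B,\ \text{preserves } A \sqsubseteq C)
\end{align*}
```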

    An Effective and Efficient Inference Control System for Relational Database Queries

    Protecting confidential information in relational databases while at the same time ensuring the availability of public information is a demanding task. Unwanted information flows due to the reasoning capabilities of database users require sophisticated inference control mechanisms, since access control is in general not sufficient to guarantee the preservation of confidentiality. The policy-driven approach of Controlled Query Evaluation (CQE) has turned out to be an effective means for controlling inferences in databases that can be modeled in a logical framework. It uses a censor function to determine whether or not the honest answer to a user query enables the user to disclose confidential information, which is declared in the form of a confidentiality policy. In doing so, CQE also takes answers to previous queries and the user’s background knowledge about the inner workings of the mechanism into account. Relational databases are usually modeled using first-order logic. In this context, the decision problem to be solved by the CQE censor becomes undecidable in general, because the censor basically performs theorem proving over an ever-growing user log. In this thesis, we develop a stateless CQE mechanism that does not need to maintain such a user log but still reaches the declarative goals of inference control. This feature comes at the price of several restrictions for the database administrator who declares the schema of the database, the security administrator who declares the information to be kept confidential, and the database user who sends queries to the database. We first investigate a scenario with quite restricted possibilities for expressing queries and confidentiality policies and propose an efficient stateless CQE mechanism. Due to the assumed restrictions, the censor function of this mechanism reduces to a simple pattern matching. Based on this case, we systematically enhance the proposed query and policy languages and investigate the respective effects on confidentiality. We suitably adapt the stateless CQE mechanism to these enhancements and formally prove the preservation of confidentiality. Finally, we develop efficient algorithmic implementations of stateless CQE, thereby showing that inference control in relational databases is feasible for actual relational database management systems under suitable restrictions.
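    The "censor as pattern matching" idea can be sketched in a few lines (a hypothetical toy of ours with made-up names, not the mechanism defined in the thesis): ground queries that match a declared secret are uniformly refused, independently of any user log, and the refusal does not depend on the honest answer, so the refusal itself leaks nothing:

```python
# Toy stateless censor: refuse any ground query that matches a policy pattern.
# Hypothetical illustration only; not the thesis's censor or policy language.
from typing import Optional

def matches(secret: tuple, query: tuple) -> bool:
    """Secrets may use '_' as a wildcard argument; queries are ground atoms,
    both written as (predicate, (arg1, arg2, ...))."""
    pred_s, args_s = secret
    pred_q, args_q = query
    return (pred_s == pred_q and len(args_s) == len(args_q)
            and all(a == '_' or a == b for a, b in zip(args_s, args_q)))

def censor(policy: list, query: tuple, honest_answer: bool) -> Optional[bool]:
    """Return the honest answer unless the query hits a policy element;
    refusal (None) is issued regardless of the true answer."""
    if any(matches(secret, query) for secret in policy):
        return None
    return honest_answer

policy = [("diagnosis", ("_", "hiv"))]                         # keep HIV diagnoses secret
print(censor(policy, ("diagnosis", ("alice", "flu")), True))   # True
print(censor(policy, ("diagnosis", ("bob", "hiv")), False))    # None (refused)
```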