
    Undecidability of the unification and admissibility problems for modal and description logics

    We show that the unification problem `is there a substitution instance of a given formula that is provable in a given logic?' is undecidable for the basic modal logics K and K4 extended with the universal modality. It follows that the admissibility problem for inference rules is undecidable for these logics as well. These are the first examples of standard decidable modal logics for which the unification and admissibility problems are undecidable. We also prove undecidability of the unification and admissibility problems for K and K4 with at least two modal operators and nominals (instead of the universal modality), thereby showing that these problems are undecidable for basic hybrid logics. Recently, unification has been introduced as an important reasoning service for description logics. The undecidability proof for K with nominals can be used to show the undecidability of unification for Boolean description logics with nominals (such as ALCO and SHIQO). The undecidability proof for K with the universal modality can be used to show that the unification problem relative to role boxes is undecidable for Boolean description logics with transitive roles, inverse roles, and role hierarchies (such as SHI and SHIQ).
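
    To illustrate the unification problem with a small example of our own (not taken from the paper): in the basic modal logic K, the reflexivity formula is not a theorem, yet it has a provable substitution instance, so it is unifiable:

    \[
    \nvdash_{\mathsf{K}} \Box p \to p
    \qquad\text{yet}\qquad
    \vdash_{\mathsf{K}} (\Box p \to p)\sigma \;\text{ for }\; \sigma\colon p \mapsto \top,
    \;\text{ since } (\Box p \to p)\sigma = \Box\top \to \top.
    \]

    For a consistent logic, a formula φ is unifiable exactly when the rule φ/⊥ is not admissible, which is why undecidability of unification transfers to undecidability of admissibility.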

    On Irrelevance and Algorithmic Equality in Predicative Type Theory

    Dependently typed programs contain an excessive amount of static terms which are necessary to please the type checker but irrelevant for computation. To separate static and dynamic code, several static analyses and type systems have been put forward. We consider Pfenning's type theory with irrelevant quantification, which is compatible with a type-based notion of equality that respects eta-laws. We extend Pfenning's theory to universes and large eliminations and develop its meta-theory. Subject reduction, normalization and consistency are obtained by a Kripke model over the typed equality judgement. Finally, a type-directed equality algorithm is described whose completeness is proven by a second Kripke model. Comment: 36 pages; supersedes the FoSSaCS 2011 paper of the first author, titled "Irrelevance in Type Theory with a Heterogeneous Equality Judgement".
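
    As a rough, schematic illustration of the kind of rule at stake (our own notation, not necessarily the paper's): an irrelevantly quantified function is applied to an argument that is checked in a "resurrected" context where irrelevant hypotheses may be used, and definitional equality then ignores that argument:

    \[
    \frac{\Gamma \vdash t : (x \mathbin{\div} U) \to T \qquad \Gamma^{\oplus} \vdash u : U}
         {\Gamma \vdash t \mathbin{\div} u : T[u/x]}
    \qquad\qquad
    \Gamma \vdash t \mathbin{\div} u \;=\; t \mathbin{\div} u' : T[u/x]
    \]

    (for any two well-typed irrelevant arguments u and u'). It is this insensitivity of equality to irrelevant positions that has to be reconciled with eta-laws, universes and large eliminations in the meta-theory.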

    Model Checking Linear Logic Specifications

    The overall goal of this paper is to investigate the theoretical foundations of algorithmic verification techniques for first-order linear logic specifications. The fragment of linear logic we consider in this paper is based on the linear logic programming language called LO enriched with universally quantified goal formulas. Although LO was originally introduced as a theoretical foundation for extensions of logic programming languages, it can also be viewed as a very general language to specify a wide range of infinite-state concurrent systems. Our approach is based on the relation between backward reachability and provability highlighted in our previous work on propositional LO programs. Following this line of research, we define here a general framework for the bottom-up evaluation of first-order linear logic specifications. The evaluation procedure is based on an effective fixpoint operator working on a symbolic representation of infinite collections of first-order linear logic formulas. The theory of well quasi-orderings can be used to provide sufficient conditions for the termination of the evaluation of non-trivial fragments of first-order linear logic. Comment: 53 pages, 12 figures. "Under consideration for publication in Theory and Practice of Logic Programming."
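
    A minimal propositional sketch of the setting (our own example, not from the paper): an LO program clause has a head of atoms joined by multiplicative disjunction and a body goal, and backchaining on it relates provable multisets of atomic goals. For a clause with head atoms a, b and body c, written a ⅋ b ∘– c, the backchaining rule is

    \[
    \frac{\;\vdash c, \Delta\;}{\;\vdash a, b, \Delta\;}
    \]

    read bottom-up: whenever the multiset c, Δ is known to be provable, so is a, b, Δ. The fixpoint operator described in the abstract saturates a symbolic representation of such provable multisets in the first-order case, and well quasi-orderings on those representations supply the termination conditions, much as multiset inclusion does in Petri-net coverability analysis.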

    Logic of Negation-Complete Interactive Proofs (Formal Theory of Epistemic Deciders)

    We produce a decidable classical normal modal logic of internalised negation-complete and thus disjunctive non-monotonic interactive proofs (LDiiP) from an existing logical counterpart of non-monotonic or instant interactive proofs (LiiP). LDiiP internalises agent-centric proof theories that are negation-complete (maximal) and consistent (and hence strictly weaker than, for example, Peano Arithmetic) and enjoy the disjunction property (like Intuitionistic Logic). In other words, internalised proof theories are ultrafilters and all internalised proof goals are definite in the sense of being either provable or disprovable to an agent by means of disjunctive internalised proofs (thus also called epistemic deciders). Still, LDiiP itself is classical (monotonic, non-constructive), negation-incomplete, and does not have the disjunction property. The price to pay for the negation-completeness of our interactive proofs is their non-monotonicity and non-communality (for singleton agent communities only). As a normal modal logic, LDiiP enjoys a standard Kripke semantics, which we justify by invoking the Axiom of Choice on LiiP's Kripke semantics and then construct in terms of a concrete oracle-computable function. LDiiP's agent-centric internalised notion of proof can also be viewed as a negation-complete disjunctive explicit refinement of standard KD45-belief, and yields a disjunctive but negation-incomplete explicit refinement of S4-provability. Comment: Expanded Introduction. Added Footnote 4. Corrected Corollaries 3 and 4. Continuation of arXiv:1208.184
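
    Schematically, and only as our own paraphrase of the properties named above (writing P_a φ for "agent a has an interactive proof of φ"): negation-completeness and the disjunction property of the internalised proof theories correspond to the schemata

    \[
    P_a\,\varphi \;\lor\; P_a\,\neg\varphi
    \qquad\text{and}\qquad
    P_a(\varphi \lor \psi) \;\to\; (P_a\,\varphi \lor P_a\,\psi),
    \]

    whereas LDiiP itself, being a classical modal logic, is negation-incomplete and lacks the disjunction property at the level of its own theorems.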

    Modalities and Parametric Adjoints


    A refined interpretation of intuitionistic logic by means of atomic polymorphism

    We study an alternative embedding of IPC into atomic system F whose translation of proofs is based, not on instantiation overflow, but instead on the admissibility of the elimination rules for disjunction and absurdity (where these connectives are defined according to the Russell–Prawitz translation). As compared to the embedding based on instantiation overflow, the alternative embedding works equally well at the levels of provability and preservation of proof identity, but it produces shorter derivations and shorter simulations of reduction sequences. Lambda-terms are employed in the technical development so that the algorithmic content is made explicit, both for the alternative and the original embeddings. The investigation of preservation of proof-reduction steps by the alternative embedding enables the analysis of the generation of “administrative” redexes. These are the key to understanding, on the one hand, the difference between the two embeddings and, on the other, whether the final word on the embedding of IPC into atomic system F has been said. The first author acknowledges support from Fundação para a Ciência e a Tecnologia (FCT) through project UID/MAT/00013/2013. The second author acknowledges support from FCT through projects UID/MAT/04561/2019 and UID/CEC/00408/2019 and is also grateful to Centro de Matemática, Aplicações Fundamentais e Investigação Operacional and to the Large-Scale Informatics Systems Laboratory (Universidade de Lisboa).
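
    For reference, the Russell–Prawitz definitions underlying both embeddings (standard definitions, not specific to this paper's contribution): disjunction and absurdity are encoded by second-order quantification,

    \[
    A \lor B \;:=\; \forall X.\,\bigl((A \to X) \to (B \to X) \to X\bigr),
    \qquad
    \bot \;:=\; \forall X.\,X,
    \]

    and atomic system F restricts type instantiation to atomic types. The original embedding recovers instances of these universal types at arbitrary formulas via instantiation overflow; the alternative embedding studied here instead derives the usual elimination rules for ∨ and ⊥ as admissible rules for the defined connectives.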

    Refinement Types for Logical Frameworks and Their Interpretation as Proof Irrelevance

    Refinement types sharpen systems of simple and dependent types by offering expressive means to more precisely classify well-typed terms. We present a system of refinement types for LF in the style of recent formulations where only canonical forms are well-typed. Both the usual LF rules and the rules for type refinements are bidirectional, leading to a straightforward proof of decidability of typechecking even in the presence of intersection types. Because we insist on canonical forms, structural rules for subtyping can now be derived rather than being assumed as primitive. We illustrate the expressive power of our system with examples and validate its design by demonstrating a precise correspondence with traditional presentations of subtyping. Proof irrelevance provides a mechanism for selectively hiding the identities of terms in type theories. We show that LF refinement types can be interpreted as predicates using proof irrelevance, establishing a uniform relationship between two previously studied concepts in type theory. The interpretation and its correctness proof are surprisingly complex, lending support to the claim that refinement types are a fundamental construct rather than just a convenient surface syntax for certain uses of proof irrelevance.
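
    A standard toy instance of such a refinement (our own illustration, not necessarily the paper's running example): refine the LF type nat by parity sorts, giving the successor constructor an intersection sort,

    \[
    \mathsf{even} \sqsubset \mathsf{nat}, \quad \mathsf{odd} \sqsubset \mathsf{nat}, \qquad
    \mathsf{zero} :: \mathsf{even}, \qquad
    \mathsf{succ} :: (\mathsf{even} \to \mathsf{odd}) \wedge (\mathsf{odd} \to \mathsf{even}),
    \]

    so that succ (succ zero) checks against the sort even while remaining, in particular, a well-typed nat. Under the interpretation described in the abstract, such sorts become predicates on the underlying type whose inhabitation evidence is marked proof-irrelevant.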

    Mechanizing the Metatheory of LF

    LF is a dependent type theory in which many other formal systems can be conveniently embedded. However, correct use of LF relies on nontrivial metatheoretic developments such as proofs of correctness of decision procedures for LF's judgments. Although detailed informal proofs of these properties have been published, they have not been formally verified in a theorem prover. We have formalized these properties within Isabelle/HOL using the Nominal Datatype Package, closely following a recent article by Harper and Pfenning. In the process, we identified and resolved a gap in one of the proofs and a small number of minor lacunae in others. We also formally derive a version of the type checking algorithm from which Isabelle/HOL can generate executable code. Besides its intrinsic interest, our formalization provides a foundation for studying the adequacy of LF encodings, the correctness of Twelf-style metatheoretic reasoning, and the metatheory of extensions to LF. Comment: Accepted to ACM Transactions on Computational Logic. Preprint.