    Untangling Typechecking of Intersections and Unions

    Intersection and union types denote conjunctions and disjunctions of properties. Using bidirectional typechecking, intersection types are relatively straightforward, but union types present challenges. For union types, we can case-analyze a subterm of union type when it appears in evaluation position (replacing the subterm with a variable, and checking that term twice under appropriate assumptions). This technique preserves soundness in a call-by-value semantics. Sadly, there are so many choices of subterms that a direct implementation is not practical. But carefully transforming programs into let-normal form drastically reduces the number of choices. The key results are soundness and completeness: a typing derivation (in the system with too many subterm choices) exists for a program if and only if a derivation exists for the let-normalized program. Comment: In Proceedings ITRS 2010, arXiv:1101.410
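
    The idea of reducing subterm choices via a let-normal form can be illustrated with a small, hedged sketch (not the paper's actual algorithm; the datatype and function names below are invented): every non-variable term in evaluation position gets named by a let, so a checker that case-analyzes union-typed subterms only ever needs to consider let-bound variables.

        -- Minimal sketch in Haskell, assuming a toy untyped lambda calculus:
        -- a let-normalization pass that names every application result, so
        -- union-type case analysis only has to consider let-bound variables.
        module LetNormal where

        data Term
          = Var String
          | Lam String Term
          | App Term Term
          | Let String Term Term
          deriving Show

        -- A tiny fresh-name supply threaded as a counter.
        type Fresh a = Int -> (a, Int)

        fresh :: Fresh String
        fresh n = ("x" ++ show n, n + 1)

        -- Normalize a term so that both parts of every application are
        -- variables (or lambdas), introducing lets for intermediate results.
        normalize :: Term -> Fresh Term
        normalize t n = norm t (\v n' -> (v, n')) n
          where
            -- CPS-style: 'norm t k' normalizes t and passes a variable or
            -- value naming its result to the continuation k.
            norm (Var x)   k n0 = k (Var x) n0
            norm (Lam x b) k n0 = let (b', n1) = normalize b n0 in k (Lam x b') n1
            norm (App f a) k n0 =
              norm f (\fv n1 ->
                norm a (\av n2 ->
                  let (x, n3)    = fresh n2
                      (body, n4) = k (Var x) n3
                  in (Let x (App fv av) body, n4)) n1) n0
            norm (Let x e b) k n0 =
              norm e (\ev n1 ->
                let (b', n2) = norm b k n1
                in (Let x ev b', n2)) n0

    For example, fst (normalize (App (Var "f") (App (Var "g") (Var "z"))) 0) produces let x0 = g z in let x1 = f x0 in x1, so only the let-bound variables x0 and x1 remain as candidates for case analysis.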

    Tridirectional typechecking

    Search for Program Structure

    The community of programming language research loves the Curry-Howard correspondence between proofs and programs. Cut-elimination as computation, theorems for free, 'call/cc' as excluded middle, dependently typed languages as proof assistants, etc. Yet we have, for all these years, missed an obvious observation: "the structure of programs corresponds to the structure of proof search". For pure programs and intuitionistic logic, more is known about the latter than the former. We think we know what programs are, but logicians know better! To motivate the study of proof search for program structure, we retrace recent research on applying focusing to study the canonical structure of simply-typed lambda-terms. We then motivate the open problem of extending canonical forms to support richer type systems, such as polymorphism, by discussing a few enticing applications of more canonical program representations.

    Refinement Types for Logical Frameworks and Their Interpretation as Proof Irrelevance

    Refinement types sharpen systems of simple and dependent types by offering expressive means to more precisely classify well-typed terms. We present a system of refinement types for LF in the style of recent formulations where only canonical forms are well-typed. Both the usual LF rules and the rules for type refinements are bidirectional, leading to a straightforward proof of decidability of typechecking even in the presence of intersection types. Because we insist on canonical forms, structural rules for subtyping can now be derived rather than being assumed as primitive. We illustrate the expressive power of our system with examples and validate its design by demonstrating a precise correspondence with traditional presentations of subtyping. Proof irrelevance provides a mechanism for selectively hiding the identities of terms in type theories. We show that LF refinement types can be interpreted as predicates using proof irrelevance, establishing a uniform relationship between two previously studied concepts in type theory. The interpretation and its correctness proof are surprisingly complex, lending support to the claim that refinement types are a fundamental construct rather than just a convenient surface syntax for certain uses of proof irrelevance.
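
    A minimal sketch (in Haskell, on a toy simply-typed calculus with invented constructor names, not the paper's LF system) of why bidirectional rules keep checking with intersection types decidable: checking a term against an intersection simply checks it against both components, so no backtracking search over the structure of the term is needed.

        -- Hedged sketch: bidirectional checking with intersection types.
        -- Subtyping and intersection elimination at applications are omitted.
        module BidirIntersect where

        data Ty = Base String | Arr Ty Ty | Inter Ty Ty
          deriving (Eq, Show)

        data Tm = V String | Lam String Tm | App Tm Tm | Ann Tm Ty
          deriving Show

        type Ctx = [(String, Ty)]

        -- Checking mode: the expected type is pushed into the term.
        check :: Ctx -> Tm -> Ty -> Bool
        check ctx tm (Inter a b)         = check ctx tm a && check ctx tm b
        check ctx (Lam x body) (Arr a b) = check ((x, a) : ctx) body b
        check ctx tm ty                  = infer ctx tm == Just ty

        -- Synthesis mode: the type is read off the term.
        infer :: Ctx -> Tm -> Maybe Ty
        infer ctx (V x)       = lookup x ctx
        infer ctx (Ann tm ty) = if check ctx tm ty then Just ty else Nothing
        infer ctx (App f a)   =
          case infer ctx f of
            Just (Arr dom cod) | check ctx a dom -> Just cod
            _                                    -> Nothing
        infer _   (Lam _ _)   = Nothing  -- unannotated lambdas only check

    For instance, the identity function Lam "x" (V "x") checks against Inter (Arr (Base "a") (Base "a")) (Arr (Base "b") (Base "b")) by being checked once against each arrow.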

    A Language-based Approach to Specification and Enforcement of Architectural Protocols

    A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive Constructions

    The paper describes the refinement algorithm for the Calculus of (Co)Inductive Constructions (CIC) implemented in the interactive theorem prover Matita. The refinement algorithm is in charge of giving a meaning to the terms, types and proof terms directly written by the user or generated by using tactics, decision procedures or general automation. The terms are written in an "external syntax" meant to be user-friendly; it allows the omission of information, untyped binders and a certain liberal use of user-defined sub-typing. The refiner modifies the terms to obtain related well-typed terms in the internal syntax understood by the kernel of the ITP. In particular, it acts as a type inference algorithm when all the binders are untyped. The proposed algorithm is bi-directional: given a term in external syntax and a type expected for the term, it propagates as much typing information as possible towards the leaves of the term. Traditional mono-directional algorithms, instead, proceed in a bottom-up way by inferring the type of a sub-term and comparing (unifying) it with the type expected by its context only at the end. We propose some novel bi-directional rules for CIC that are particularly effective. Among the benefits of bi-directionality are better error message reporting and better inference of dependent types. Moreover, thanks to bi-directionality, the coercion system for sub-typing is more effective and type inference generates simpler unification problems that are more likely to be solved by the inherently incomplete higher-order unification algorithms implemented. Finally, we introduce in the external syntax the notion of a vector of placeholders, which makes it possible to omit an arbitrary number of arguments at once. Vectors of placeholders allow a trivial implementation of implicit arguments and greatly simplify the implementation of primitive and simple tactics.
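
    To make the contrast with mono-directional inference concrete, here is a hedged sketch (in Haskell, on a toy simply-typed calculus rather than CIC; all names are invented) of how a bidirectional checker propagates the expected type toward the leaves and thereby fills in an untyped binder that bottom-up synthesis alone could not handle.

        -- Hedged sketch: bidirectional propagation of expected types into
        -- untyped binders, versus purely bottom-up synthesis.
        module BidirRefine where

        data Ty = TBase String | TArr Ty Ty
          deriving (Eq, Show)

        -- External syntax: a lambda may omit its binder's type (Nothing).
        data Tm = Var String | Lam String (Maybe Ty) Tm | App Tm Tm
          deriving Show

        type Ctx = [(String, Ty)]

        -- Checking mode: the expected type is propagated downward, so an
        -- untyped binder receives its type from the expected arrow.
        check :: Ctx -> Tm -> Ty -> Bool
        check ctx (Lam x Nothing   body) (TArr a b) = check ((x, a) : ctx) body b
        check ctx (Lam x (Just a') body) (TArr a b) = a' == a && check ((x, a) : ctx) body b
        check ctx tm ty                             = infer ctx tm == Just ty

        -- Synthesis mode: bottom-up inference, which gets stuck on an
        -- untyped binder because no information arrives from below.
        infer :: Ctx -> Tm -> Maybe Ty
        infer ctx (Var x)               = lookup x ctx
        infer ctx (Lam x (Just a) body) = TArr a <$> infer ((x, a) : ctx) body
        infer _   (Lam _ Nothing  _)    = Nothing
        infer ctx (App f a)             =
          case infer ctx f of
            Just (TArr dom cod) | check ctx a dom -> Just cod
            _                                     -> Nothing

    For example, check [] (Lam "x" Nothing (Var "x")) (TArr (TBase "nat") (TBase "nat")) succeeds because the expected arrow supplies the binder's type, while infer [] (Lam "x" Nothing (Var "x")) returns Nothing.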