
    A Type System For Call-By-Name Exceptions

    We present an extension of System F with call-by-name exceptions. The type system is enriched with two syntactic constructs: a union type for programs whose execution may raise an exception at top level, and a corruption type for programs that may raise an exception in any evaluation context (not necessarily at top level). We present the syntax and reduction rules of the system, as well as its typing and subtyping rules. We then study its properties, such as confluence. Finally, we construct a realizability model using orthogonality techniques, from which we deduce that well-typed programs are weakly normalizing and that those with the type of natural numbers actually compute a natural number, without raising exceptions. Comment: 25 pages
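
    The union/corruption distinction can be illustrated with a small call-by-name reduction sketch (a minimal illustration in Haskell, with constructors of our own choosing, not the paper's System F extension): under call-by-name, a lambda whose body raises is already a value, so it does not raise at top level, yet every application of it eventually does.

        -- Minimal sketch: untyped call-by-name lambda terms with a raise construct.
        data Term
          = Var String
          | Lam String Term
          | App Term Term
          | Raise String            -- raise the exception named by the string
          deriving Show

        -- Naive substitution; adequate for the closed example terms used here.
        subst :: String -> Term -> Term -> Term
        subst x s (Var y)   | x == y    = s
                            | otherwise = Var y
        subst x s (Lam y b) | x == y    = Lam y b
                            | otherwise = Lam y (subst x s b)
        subst x s (App f a) = App (subst x s f) (subst x s a)
        subst _ _ (Raise e) = Raise e

        -- One step of weak-head (call-by-name) reduction.
        step :: Term -> Maybe Term
        step (App (Lam x b) a) = Just (subst x a b)   -- beta: the argument is passed unevaluated
        step (App (Raise e) _) = Just (Raise e)       -- an exception in head position propagates
        step (App f a)         = (`App` a) <$> step f
        step _                 = Nothing              -- variables, lambdas and Raise do not step

    Here Lam "x" (Raise "e") is a value and does not raise at top level, while App (Lam "x" (Raise "e")) (Lam "z" (Var "z")) steps to Raise "e"; the paper's union and corruption types track exactly this difference.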

    Dual-Context Calculi for Modal Logic

    We present natural deduction systems and associated modal lambda calculi for the necessity fragments of the normal modal logics K, T, K4, GL and S4. These systems are in the dual-context style: they feature two distinct zones of assumptions, one of which can be thought as modal, and the other as intuitionistic. We show that these calculi have their roots in in sequent calculi. We then investigate their metatheory, equip them with a confluent and strongly normalizing notion of reduction, and show that they coincide with the usual Hilbert systems up to provability. Finally, we investigate a categorical semantics which interprets the modality as a product-preserving functor.Comment: Full version of article previously presented at LICS 2017 (see arXiv:1602.04860v4 or doi: 10.1109/LICS.2017.8005089
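
    As a small illustration of the two-zone judgment (a sketch of our own in Haskell, covering only hypotheses and the introduction rules in the S4-style dual-context formulation, not the paper's full calculi): the modal zone Delta survives under a box, while the intuitionistic zone Gamma is discarded.

        -- Propositions of the necessity fragment.
        data Prop = Atom String | Imp Prop Prop | Box Prop
          deriving (Eq, Show)

        type Ctx = [Prop]

        -- A deliberately incomplete provability check, shown only to exhibit the
        -- two zones: prove delta gamma a asks whether  Delta ; Gamma |- a.
        prove :: Ctx -> Ctx -> Prop -> Bool
        prove delta gamma a
          | a `elem` gamma || a `elem` delta = True              -- hypothesis, from either zone
        prove delta gamma (Imp a b) = prove delta (a : gamma) b  -- implication introduction
        prove delta _     (Box a)   = prove delta [] a           -- box introduction: Gamma is emptied
        prove _     _     _         = False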

    The exp-log normal form of types

    Lambda calculi with algebraic data types lie at the core of functional programming languages and proof assistants, but they conceal at least two fundamental theoretical problems already in the presence of the simplest non-trivial data type, the sum type. First, we do not know of an explicit and implemented algorithm for deciding the beta-eta-equality of terms, despite the first decidability results having been proven two decades ago. Second, it is not clear how to decide when two types are essentially the same, i.e. isomorphic, in spite of the meta-theoretic results on decidability of the isomorphism. In this paper, we present the exp-log normal form of types, derived from the representation of exponential polynomials via the unary exponential and logarithmic functions, to which any type built from arrows, products, and sums can be isomorphically mapped. The type normal form can be used as a simple heuristic for deciding type isomorphism, thanks to the fact that it is a systematic application of the high-school identities. We then show that the type normal form allows us to reduce the standard beta-eta equational theory of the lambda calculus to a specialized version of itself, while preserving the completeness of equality on terms. We end by describing an alternative representation of normal terms of the lambda calculus with sums, together with a Coq-implemented converter into/from our new term calculus. The difference from the only other previously implemented heuristic for deciding interesting instances of eta-equality, due to Balat, Di Cosmo, and Fiore, is that we exploit the type information of terms substantially, which often allows us to obtain a canonical representation of terms without performing sophisticated term analyses.
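
    The "high-school identities" that the normal form applies systematically are the familiar exponential laws read as type isomorphisms. A minimal sketch (ours, in Haskell) of three of them, each witnessed by a pair of mutually inverse functions:

        -- c^(a+b) ≅ c^a × c^b : a function out of a sum is a pair of functions.
        fromSum :: (Either a b -> c) -> (a -> c, b -> c)
        fromSum f = (f . Left, f . Right)

        toSum :: (a -> c, b -> c) -> (Either a b -> c)
        toSum (g, h) = either g h

        -- (b×c)^a ≅ b^a × c^a : a function into a product is a product of functions.
        fromProd :: (a -> (b, c)) -> (a -> b, a -> c)
        fromProd f = (fst . f, snd . f)

        toProd :: (a -> b, a -> c) -> (a -> (b, c))
        toProd (g, h) x = (g x, h x)

        -- c^(a×b) ≅ (c^b)^a : currying.
        curried :: ((a, b) -> c) -> (a -> b -> c)
        curried = curry

        uncurried :: (a -> b -> c) -> ((a, b) -> c)
        uncurried = uncurry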

    A Logical Foundation for Environment Classifiers

    Taha and Nielsen have developed a multi-stage calculus λα with a sound type system using the notion of environment classifiers. These are special identifiers with which code fragments and variable declarations are annotated, and their scoping mechanism is used to ensure statically that certain code fragments are closed and safely runnable. In this paper, we investigate the Curry-Howard isomorphism for environment classifiers by developing a typed λ-calculus λ▷. It corresponds to a multi-modal logic that allows quantification by transition variables, a counterpart of classifiers, which range over (possibly empty) sequences of labeled transitions between possible worlds. This interpretation reduces the "run" construct, which has a special typing rule in λα, and the embedding of closed code into code fragments of different stages, which would only be realized by the cross-stage persistence operator in λα, to mere special cases of classifier application. λ▷ enjoys not only basic properties including subject reduction, confluence, and strong normalization, but also an important property as a multi-stage calculus: time-ordered normalization of full reduction. We then develop a big-step evaluation semantics for an ML-like language based on λ▷ with its type system and prove that the evaluation of a well-typed λ▷ program is properly staged. We also identify a fragment of the language where erasure evaluation is possible. Finally, we show that the proof system augmented with a classical axiom is sound and complete with respect to a Kripke semantics of the logic.
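
    A rough intuition for classifiers and "run" (an analogy of our own in Haskell, not the paper's calculus): model a classifier as a phantom type variable on a code type, and make run demand polymorphism in that variable, so that only code with no stage-specific free variables can be run, much as runST closes over its state thread.

        {-# LANGUAGE RankNTypes #-}

        -- The phantom parameter c plays the role of an environment classifier.
        newtype Code c a = Code a

        -- Quote a value as code under an arbitrary classifier.
        quote :: a -> Code c a
        quote = Code

        -- Splice code into a larger fragment annotated with the same classifier.
        splice :: Code c (a -> b) -> Code c a -> Code c b
        splice (Code f) (Code x) = Code (f x)

        -- Run code only if it is closed, i.e. well-typed for every classifier.
        run :: (forall c. Code c a) -> a
        run (Code x) = x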