515 research outputs found

    Tracking Data-Flow with Open Closure Types

    Type systems hide data that is captured by function closures in function types. In most cases this is a beneficial design that favors simplicity and compositionality. However, some applications require explicit information about the data that is captured in closures. This paper introduces open closure types, that is, function types that are decorated with type contexts. They are used to track data-flow from the environment into the function closure. A simply-typed lambda calculus is used to study the properties of the type theory of open closure types. A distinctive feature of this type theory is that an open closure type of a function can vary in different type contexts. To present an application of the type theory, it is shown that a type derivation establishes a simple non-interference property in the sense of information-flow theory. A publicly available prototype implementation of the system can be used to experiment with type derivations for example programs. Comment: Logic for Programming, Artificial Intelligence and Reasoning (2013)
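
    A minimal sketch of the idea in Haskell (the encoding and names are illustrative, not the paper's formal syntax): function types carry the typing context of the variables their closure captures, so data-flow from the environment is visible in the type.

        -- Open closure types, sketched: a function type [Γ](τ1 -> τ2)
        -- records the context Γ of environment variables the closure
        -- captures. Encoding and names are illustrative only.
        type Var = String

        data Ty
          = TInt
          | TFun Ctx Ty Ty   -- [Γ](τ1 -> τ2)
          deriving (Eq, Show)

        type Ctx = [(Var, Ty)]

        -- \x -> x + y, typed where y : Int is in scope, gets the open
        -- closure type [y : Int](Int -> Int), exposing the flow from y.
        example :: Ty
        example = TFun [("y", TInt)] TInt TInt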

    Verified Compilers for a Multi-Language World

    Though there has been remarkable progress on formally verified compilers in recent years, most of these compilers suffer from a serious limitation: they are proved correct under the assumption that they will only be used to compile whole programs. This is an unrealistic assumption, since most software systems today are composed of components written in different languages - both typed and untyped - compiled by different compilers to a common target, as well as low-level libraries that may be handwritten in the target language. We are pursuing a new methodology for building verified compilers for today's world of multi-language software. The project has two central themes, both of which stem from a view of compiler correctness as a language interoperability problem. First, to specify correctness of component compilation, we require that if a source component s compiles to target component t, then t linked with some arbitrary target code t' should behave the same as s interoperating with t'. The latter demands a formal semantics of interoperability between the source and target languages. Second, to enable safe interoperability between components compiled from languages as different as ML, Rust, Python, and C, we plan to design a gradually type-safe target language based on LLVM that supports safe interoperability between more precisely typed, less precisely typed, and type-unsafe components. Our approach opens up a new avenue for exploring sensible language interoperability while also tackling compiler correctness.
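
    Schematically, the correctness statement from the abstract reads as follows (the notation is mine, not the paper's):

        % Compositional compiler correctness, schematically: if source
        % component s compiles to target component t, then linking t with
        % any target code t' behaves like s interoperating with t':
        \mathsf{compile}(s) = t \;\Longrightarrow\;
          \forall t'.\; \mathsf{link}(t, t') \,\approx\, (s \bowtie t')
        % where s ⋈ t' denotes source-target interoperation and ≈ is
        % behavioural equivalence.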

    New Equations for Neutral Terms: A Sound and Complete Decision Procedure, Formalized

    The definitional equality of an intensional type theory is its test of type compatibility. Today's systems rely on ordinary evaluation semantics to compare expressions in types, frustrating users with type errors arising when evaluation fails to identify two `obviously' equal terms. If only the machine could decide a richer theory! We propose a way to decide theories which supplement evaluation with `ν-rules', rearranging the neutral parts of normal forms, and report a successful initial experiment. We study a simple λ-calculus with primitive fold, map and append operations on lists and develop in Agda a sound and complete decision procedure for an equational theory enriched with monoid, functor and fusion laws.
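
    The flavour of these extra equations can be seen in the familiar list laws below (Haskell renderings of the monoid, functor and fusion laws named above; the encoding is illustrative, not the Agda development):

        -- Monoid laws for append:
        --   xs ++ []         == xs
        --   (xs ++ ys) ++ zs == xs ++ (ys ++ zs)
        -- Functor laws for map:
        --   map id      == id
        --   map (f . g) == map f . map g
        -- Fusion law rearranging neutral terms (a nu-rule):
        --   map f (xs ++ ys) == map f xs ++ map f ys

        -- Sanity check of the fusion law on sample data:
        fusionHolds :: Bool
        fusionHolds =
          map (+ 1) ([1, 2] ++ [3]) == map (+ 1) [1, 2] ++ map (+ 1) [3]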

    Modular, Fully-abstract Compilation by Approximate Back-translation

    A compiler is fully-abstract if the compilation from source language programs to target language programs reflects and preserves behavioural equivalence. Such compilers have important security benefits, as they limit the power of an attacker interacting with the program in the target language to that of an attacker interacting with the program in the source language. Proving compiler full-abstraction is, however, rather complicated. A common proof technique is based on the back-translation of target-level program contexts to behaviourally-equivalent source-level contexts. However, constructing such a back-translation is problematic when the source language is not strong enough to embed an encoding of the target language. For instance, when compiling from STLC to ULC, the lack of recursive types in the former prevents such a back-translation. We propose a general and elegant solution for this problem. The key insight is that it suffices to construct an approximate back-translation. The approximation is only accurate up to a certain number of steps and conservative beyond that, in the sense that the context generated by the back-translation may diverge when the original would not, but not vice versa. Based on this insight, we describe a general technique for proving compiler full-abstraction and demonstrate it on a compiler from STLC to ULC. The proof uses asymmetric cross-language logical relations and makes innovative use of step-indexing to express the relation between a context and its approximate back-translation. The proof extends easily to common compiler patterns such as modular compilation and, to the best of our knowledge, it is the first compiler full-abstraction proof to have been fully mechanised in Coq. We believe this proof technique can scale to challenging settings and enable simpler, more scalable proofs of compiler full-abstraction.
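
    The "accurate up to n steps" idea can be pictured with a fuel-bounded evaluator (a sketch under my own simplifications, not the paper's Coq development): faithful for up to n steps, and conservative (here, reporting possible divergence) once the fuel runs out.

        -- Fuel-bounded evaluation: accurate up to n steps, and
        -- conservatively answers Nothing ("may diverge") beyond that,
        -- mirroring an n-approximate back-translation. Illustrative only.
        data Term = Var Int | Lam Term | App Term Term
          deriving Show

        approx :: Int -> (Term -> Maybe Term) -> Term -> Maybe Term
        approx 0 _    _ = Nothing        -- out of fuel: conservative answer
        approx n step t = case step t of
          Nothing -> Just t              -- normal form: accurate result
          Just t' -> approx (n - 1) step t'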

    Relating Two Semantics of Locally Scoped Names

    The operational semantics of programming constructs involving locally scoped names typically makes use of stateful "dynamic allocation": a set of currently-used names forms part of the state and upon entering a scope the set is augmented by a new name bound to the scoped identifier. More abstractly, one can see this as a transformation of local scopes by expanding them outward to an implicit top-level. By contrast, in a neglected paper from 1994, Odersky gave a stateless lambda calculus with locally scoped names whose dynamics contracts scopes inward. The properties of "Odersky-style" local names are quite different from dynamically allocated ones and it has not been clear, until now, what the expressive power of Odersky's notion is. We show that in fact it provides a direct semantics of locally scoped names from which the more familiar dynamic allocation semantics can be obtained by continuation-passing style (CPS) translation. More precisely, we show that there is a CPS translation of typed lambda calculus with dynamically allocated names (the Pitts-Stark nu-calculus) into Odersky's lambda-nu-calculus which is computationally adequate with respect to observational equivalence in the two calculi.
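
    The scope-contraction dynamics can be rendered as rewrite rules (a toy Haskell encoding of mine; dynamic allocation would instead move each new name outward into a global set of used names):

        -- Odersky-style names contract scopes inward:
        --   nu a. (e1 e2)  ->  (nu a. e1) (nu a. e2)
        --   nu a. e        ->  e              if a is not free in e
        data Nu = Name String | Appl Nu Nu | Bind String Nu
          deriving Show

        contract :: Nu -> Maybe Nu
        contract (Bind a (Appl e1 e2)) = Just (Appl (Bind a e1) (Bind a e2))
        contract (Bind a e) | a `notElem` free e = Just e
        contract _ = Nothing

        free :: Nu -> [String]
        free (Name x)   = [x]
        free (Appl a b) = free a ++ free b
        free (Bind a e) = filter (/= a) (free e)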

    The Safe Lambda Calculus

    Safety is a syntactic condition of higher-order grammars that constrains occurrences of variables in the production rules according to their type-theoretic order. In this paper, we introduce the safe lambda calculus, which is obtained by transposing (and generalizing) the safety condition to the setting of the simply-typed lambda calculus. In contrast to the original definition of safety, our calculus does not constrain types (to be homogeneous). We show that in the safe lambda calculus, there is no need to rename bound variables when performing substitution, as variable capture is guaranteed not to happen. We also propose an adequate notion of beta-reduction that preserves safety. In the same vein as Schwichtenberg's 1976 characterization of the simply-typed lambda calculus, we show that the numeric functions representable in the safe lambda calculus are exactly the multivariate polynomials; thus the conditional is not definable. We also give a characterization of representable word functions. We then study the complexity of deciding beta-eta equality of two safe simply-typed terms and show that this problem is PSPACE-hard. Finally, we give a game-semantic analysis of safety: we show that safe terms are denoted by `P-incrementally justified strategies'. Consequently, pointers in the game semantics of safe lambda-terms are only necessary from order 4 onwards.
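
    A toy rendering of the order discipline involved (my simplification, not the paper's full definition of safety): free variables of a safe term may not have order strictly smaller than the order of the term itself, which is what rules out variable capture during substitution.

        -- Type order: order(o) = 0, order(A -> B) = max(order A + 1, order B).
        data Ty = Base | Arr Ty Ty  deriving (Eq, Show)

        order :: Ty -> Int
        order Base      = 0
        order (Arr a b) = max (order a + 1) (order b)

        -- Simplified safety check: every free variable of a term of type t
        -- must have order at least the order of t.
        safeAt :: Ty -> [Ty] -> Bool
        safeAt t fvTypes = all (\v -> order v >= order t) fvTypes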

    A type checking algorithm for qualified session types

    We present a type checking algorithm for establishing a session-based discipline in the pi calculus of Milner, Parrow and Walker. Our session types are qualified as linear or unrestricted. Linearly typed communication channels are guaranteed to occur in exactly one thread, possibly multiple times; afterwards they evolve as unrestricted channels. Session protocols are described by a type constructor that denotes the two ends of one and the same communication channel. We ensure the soundness of the algorithm by showing that processes consuming all linear resources are accepted by a type system that preserves typings during the computation, and that type checking is consistent with respect to structural congruence. Comment: In Proceedings WWV 2011, arXiv:1108.208
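
    The shape of the types involved can be sketched as follows (grammar only, in an illustrative Haskell encoding of mine; this is not the paper's algorithm):

        data Qual = Lin | Un  deriving (Eq, Show)  -- linear / unrestricted

        data BType = BBool | BInt  deriving Show   -- payload types

        data SType                                 -- session types
          = End                                    -- no further communication
          | Q Qual Pre                             -- qualified prefix
          deriving Show

        data Pre
          = Recv BType SType                       -- ?B.S: receive a B, continue as S
          | Send BType SType                       -- !B.S: send a B, continue as S
          deriving Show

        -- lin ?Int . lin !Bool . end -- a channel end that, in exactly one
        -- thread, receives an Int, then sends a Bool, then stops:
        example :: SType
        example = Q Lin (Recv BInt (Q Lin (Send BBool End)))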

    Observed communication semantics for classical processes

    Classical Linear Logic (CLL) has long inspired readings of its proofs as communicating processes. Wadler's CP calculus is one of these readings. Wadler gave CP an operational semantics by selecting a subset of the cut-elimination rules of CLL to use as reduction rules. This semantics has an appealing close connection to the logic, but does not resolve the status of the other cut-elimination rules, and does not admit an obvious notion of observational equivalence. We propose a new operational semantics for CP based on the idea of observing communication, and use this semantics to define an intuitively reasonable notion of observational equivalence. To reason about observational equivalence, we use the standard relational denotational semantics of CLL. We show that this denotational semantics is adequate for our operational semantics. This allows us to deduce that, for instance, all the cut-elimination rules of CLL are observational equivalences.
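
    A toy picture of "observing communication" (my encoding; the paper works with the relational semantics of CLL): a process is judged by the communications visible on its channels, and two processes are equivalent when no observation distinguishes them.

        import Data.List (sort)

        data Obs = Sent String | Received String
          deriving (Eq, Ord, Show)

        -- Equivalence as equality of observed communications; comparing up
        -- to reordering is a simplification of the real definition.
        obsEq :: [Obs] -> [Obs] -> Bool
        obsEq p q = sort p == sort q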