
    A Foundational View on Integration Problems

    The integration of reasoning and computation services across system and language boundaries is a challenging problem in computer science. In this paper, we use the term integration for the scenario where two systems are integrated by moving problems and solutions between them. While this scenario is often approached from an engineering perspective, we take a foundational view. Based on the generic declarative language MMT, we develop a theoretical framework for system integration using theories and partial theory morphisms. Because MMT permits representations of the meta-logical foundations themselves, this includes integration across logics. We discuss safe and unsafe integration schemes and devise a general form of safe integration.
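
    As a rough illustration of this integration scenario (not MMT itself, whose morphisms live at the level of formal theories), the sketch below models a partial translation between two systems in Haskell: problems are moved to the other system only where the translation is defined, solutions are translated back, and the native solver is used as a fallback. All names here (ProblemA, translateProblem, integrate, ...) are hypothetical.

```haskell
-- A minimal sketch (not MMT's API; all names are hypothetical) of the
-- integration scenario: two systems A and B, connected by a *partial*
-- translation of problems and solutions.  Partiality models a theory
-- morphism that is only defined on a fragment of A's language; outside
-- that fragment a safe integration must refuse rather than guess.

data ProblemA     = ProveA String         -- a goal in system A's language
data ProblemB     = ProveB String         -- a goal in system B's language
newtype SolutionA = SolutionA String
newtype SolutionB = SolutionB String

-- The partial morphism on problems: translate only what is known to be sound.
translateProblem :: ProblemA -> Maybe ProblemB
translateProblem (ProveA goal)
  | supportedByB goal = Just (ProveB goal)
  | otherwise         = Nothing
  where supportedByB = not . null         -- placeholder side condition

-- Translating B's answers back into A's world.
translateSolution :: SolutionB -> SolutionA
translateSolution (SolutionB s) = SolutionA s

-- Safe integration: use B where the morphism is defined, fall back to A.
integrate :: (ProblemB -> SolutionB)      -- external system B
          -> (ProblemA -> SolutionA)      -- A's native solver
          -> ProblemA -> SolutionA
integrate solveB solveA p =
  case translateProblem p of
    Just pB -> translateSolution (solveB pB)
    Nothing -> solveA p
```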

    A complete proof of the safety of Nöcker's strictness analysis

    This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method for strictness analysis of lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
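
    The cycle-detection principle behind such a non-termination checker can be illustrated in a few lines of Haskell. The sketch below is only a toy model of the idea, not SAL and not Nöcker's abstract reduction: reduce a term step by step and report divergence as soon as a previously seen term shows up again.

```haskell
-- A toy illustration of the cycle-detection principle behind a
-- non-termination checker (this is not SAL and not Nöcker's abstract
-- reduction): reduce step by step and report divergence as soon as a
-- previously seen term shows up again.
import qualified Data.Set as Set

data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Ord, Show)

-- One normal order (leftmost-outermost) beta-reduction step, if any.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a) = case step f of
                   Just f' -> Just (App f' a)
                   Nothing -> App f <$> step a
step (Lam x b) = Lam x <$> step b
step _         = Nothing

-- Naive substitution; adequate for the closed example below.
subst :: String -> Term -> Term -> Term
subst x a (Var y)   | x == y    = a
                    | otherwise = Var y
subst x a (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x a b)
subst x a (App f b) = App (subst x a f) (subst x a b)

data Verdict = Diverges | NormalForm Term | Unknown deriving Show

-- Remember every term seen so far; a repeat means a reduction cycle.
check :: Int -> Term -> Verdict
check fuel = go fuel Set.empty
  where
    go 0 _    _ = Unknown
    go n seen t
      | t `Set.member` seen = Diverges
      | otherwise = case step t of
          Nothing -> NormalForm t
          Just t' -> go (n - 1) (Set.insert t seen) t'

-- omega = (\x. x x) (\x. x x) reduces to itself, so 'check 100 omega'
-- returns Diverges.
omega :: Term
omega = App d d  where d = Lam "x" (App (Var "x") (Var "x"))
```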

    On the safety of Nöcker's strictness analysis

    This paper proves the correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective method for strictness analysis of lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda-calculus with case, constructors, letrec, and a nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of the correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
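
    For readers unfamiliar with what strictness analysis establishes, the classic two-point-domain reading of strictness, which safety results of this kind justify but which is not the paper's abstract reduction calculus, can be sketched as follows: a function is strict in an argument exactly when a diverging argument forces the result to diverge, so the compiler may evaluate that argument eagerly.

```haskell
-- The classic two-point-domain reading of strictness (not the paper's
-- abstract reduction calculus): Bot abstracts "definitely diverges",
-- TopV abstracts "could be anything".  A function is strict in an
-- argument exactly when feeding Bot there yields Bot.
data AbsVal = Bot | TopV deriving (Eq, Show)

joinA :: AbsVal -> AbsVal -> AbsVal     -- least upper bound
joinA Bot Bot = Bot
joinA _   _   = TopV

plusA :: AbsVal -> AbsVal -> AbsVal     -- (+) is strict in both arguments
plusA Bot _ = Bot
plusA _ Bot = Bot
plusA _ _   = TopV

iteA :: AbsVal -> AbsVal -> AbsVal -> AbsVal
iteA Bot _ _ = Bot                      -- diverging condition: result diverges
iteA _   t e = joinA t e                -- otherwise either branch may be taken

-- Abstract meaning of   f x y = if x == 0 then y else x + y
-- (the condition x == 0 has the same abstract value as x, since (==) is strict).
fAbs :: AbsVal -> AbsVal -> AbsVal
fAbs x y = iteA x y (plusA x y)

-- fAbs Bot TopV == Bot and fAbs TopV Bot == Bot, so f is strict in both
-- arguments and a compiler may evaluate them eagerly at call sites.
strictInX, strictInY :: Bool
strictInX = fAbs Bot TopV == Bot
strictInY = fAbs TopV Bot == Bot
```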

    Proof-Theoretic Methods for Analysis of Functional Programs

    We investigate how, in a natural deduction setting, we can specify concisely a wide variety of tasks that manipulate programs as data objects. This study will provide us with a better understanding of various kinds of manipulations of programs and also an operational understanding of numerous features and properties of a rich functional programming language. We present a technique, inspired by structural operational semantics and natural semantics, for specifying properties of, or operations on, programs. Specifications of this sort are presented as sets of inference rules and are encoded as clauses in a higher-order, intuitionistic meta-logic. Program properties are then proved by constructing proofs in this meta-logic. We argue the following points regarding these specifications and their proofs: (i) the specifications are clear and concise and they provide intuitive descriptions of the properties being described; (ii) a wide variety of program analysis tools can be specified in a single unified framework, and thus we can investigate and understand the relationship between various tools; (iii) proof theory provides a well-established and formal setting in which to examine meta-theoretic properties of these specifications; and (iv) the meta-logic we use can be implemented naturally in an extended logic programming language and thus we can produce experimental implementations of the specifications. We expect that our efforts will provide new perspectives and insights for many program manipulation tasks.
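
    To make the flavour of such rule-based specifications concrete, here is a toy big-step (natural semantics) evaluation judgment, with each inference rule shown as a comment and transcribed into Haskell for illustration; the paper itself encodes such rules as clauses of a higher-order intuitionistic meta-logic rather than as a functional program.

```haskell
-- A toy natural-semantics specification of evaluation.  Each equation of
-- 'eval' corresponds to one inference rule for the judgment  e => v,
-- written as a comment above it.  (The paper encodes such rules in a
-- higher-order intuitionistic meta-logic, not in Haskell.)
data Expr = Lit Int
          | Add Expr Expr
          | Fun (Expr -> Expr)      -- higher-order abstract syntax
          | App Expr Expr

eval :: Expr -> Expr
--  ---------------                         (lit)
--   Lit n => Lit n
eval (Lit n) = Lit n
--   e1 => Lit n1    e2 => Lit n2
--  ------------------------------          (add)
--   Add e1 e2 => Lit (n1 + n2)
eval (Add e1 e2) =
  case (eval e1, eval e2) of
    (Lit n1, Lit n2) -> Lit (n1 + n2)
    _                -> error "Add expects numbers"
--  ---------------                         (fun)
--   Fun f => Fun f
eval (Fun f) = Fun f
--   e1 => Fun f    e2 => v2    f v2 => v
--  --------------------------------------  (app, call-by-value)
--   App e1 e2 => v
eval (App e1 e2) =
  case eval e1 of
    Fun f -> eval (f (eval e2))
    _     -> error "application of a non-function"
```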

    Variable elimination for building interpreters

    In this paper, we build an interpreter by reusing host language functions instead of recoding mechanisms of function application that are already available in the host language (the language which is used to build the interpreter). In order to transform user-defined functions into host language functions, we use combinatory logic: lambda-abstractions are transformed into a composition of combinators. We provide a mechanically checked proof that this step is correct for the call-by-value strategy with imperative features.
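
    The lambda-elimination step the paper verifies is, in essence, bracket abstraction. The sketch below shows the textbook S/K/I version in Haskell; it conveys the idea only and is not the paper's mechanically checked development, which targets a call-by-value host language with imperative features.

```haskell
-- Textbook bracket abstraction into the S, K and I combinators: every
-- lambda-abstraction is eliminated in favour of combinator applications,
-- after which terms can be run directly as host-language functions.
data Lam  = LVar String | LLam String Lam | LApp Lam Lam  deriving Show
data Comb = S | K | I | CVar String | CApp Comb Comb      deriving Show

-- abstractVar x e  builds a combinator term denoting  \x. e.
abstractVar :: String -> Comb -> Comb
abstractVar x (CVar y) | x == y     = I
abstractVar x e | not (occurs x e)  = CApp K e
abstractVar x (CApp f a) = CApp (CApp S (abstractVar x f)) (abstractVar x a)
abstractVar _ e          = CApp K e   -- unreachable: covered by the cases above

occurs :: String -> Comb -> Bool
occurs x (CVar y)   = x == y
occurs x (CApp f a) = occurs x f || occurs x a
occurs _ _          = False

-- Compile lambda terms to pure combinator terms, innermost binders first.
compile :: Lam -> Comb
compile (LVar x)   = CVar x
compile (LApp f a) = CApp (compile f) (compile a)
compile (LLam x b) = abstractVar x (compile b)

-- Example:  compile (\x. \y. x)  yields  S (K K) I,  which behaves like K.
example :: Comb
example = compile (LLam "x" (LLam "y" (LVar "x")))
```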

    Modular, higher order cardinality analysis in theory and practice

    Since the mid '80s, compiler writers for functional languages (especially lazy ones) have been writing papers about identifying and exploiting thunks and lambdas that are used only once. However, it has proved difficult to achieve both power and simplicity in practice. In this paper, we describe a new, modular analysis for a higher-order language, which is both simple and effective. We prove the analysis sound with respect to a standard call-by-need semantics, and present measurements of its use in a full-scale, state-of-the-art optimising compiler. The analysis finds many single-entry thunks and one-shot lambdas and enables a number of program optimisations. This paper extends our preceding conference publication (Sergey et al. 2014, Proceedings of the 41st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2014), ACM, pp. 335–348) with proofs, an expanded report on the evaluation, and a detailed examination of the factors causing the loss of precision in the analysis.
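
    The two properties the analysis looks for can be illustrated with small Haskell fragments of our own (not the paper's benchmarks): a single-entry thunk is demanded at most once, so the thunk-update step can be dropped, and a one-shot lambda is entered at most once, so bindings can be kept or floated inside it without losing sharing.

```haskell
-- Small illustrative fragments (ours, not the paper's benchmarks).

-- Single-entry thunk: 'big' is demanded at most once, so after the
-- analysis the compiler can omit the update that would overwrite the
-- thunk with its value, since no second demand will ever read it.
usedAtMostOnce :: Int -> Int
usedAtMostOnce n =
  let big = product [1 .. n]          -- single-entry thunk
  in if n > 10 then big else n

-- Here the same thunk is demanded twice, so the update must stay to keep
-- the second use from recomputing the product.
usedTwice :: Int -> Int
usedTwice n =
  let big = product [1 .. n]          -- multi-entry thunk: keep the update
  in big + big

-- One-shot lambda: if every partial application  g x  is itself applied
-- at most once, then 'ys' can never be shared across several calls of the
-- inner lambda, so the compiler may compute it inside the lambda (or
-- eta-expand g) without losing any sharing.
g :: Int -> Int -> Int
g x = let ys = replicate x x
      in \z -> sum ys + z
```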