
    Deciding subset relationship of co-inductively defined set constants

    Static analysis of different non-strict functional programming languages makes use of set constants like Top, Inf, and Bot, denoting all expressions, all lists without a last Nil as tail, and all non-terminating programs, respectively. We use a set language that permits union, constructors, and recursive definition of set constants with a greatest fixpoint semantics. This paper proves decidability, in particular EXPTIME-completeness, of the subset relationship between co-inductively defined sets by using algorithms and results from tree automata. This shows decidability of the test for set inclusion, which is required by certain strictness analysis algorithms in lazy functional programming languages.
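
    For a concrete feel of the set language described above, here is a minimal sketch in Haskell; all names in it (SetExp, defs, and the sample equation for Inf) are our own illustration, not the paper's formalism.

```haskell
-- A minimal sketch, assuming a representation of the paper's set
-- language: union, constructor application, and named constants whose
-- recursive equations are read with greatest-fixpoint semantics.
data SetExp
  = Constant String        -- a defined set constant, e.g. "Top" or "Inf"
  | Union SetExp SetExp    -- set union
  | Con String [SetExp]    -- constructor application, e.g. Cons h t
  deriving Show

-- "Top" and "Bot" are taken as primitive here; "Inf" is defined
-- co-inductively: an Inf list is a Cons whose tail is again in Inf,
-- so it never ends in Nil.
defs :: [(String, SetExp)]
defs = [("Inf", Con "Cons" [Constant "Top", Constant "Inf"])]
```

    Deciding a subset relation between two such constants then amounts to a language-inclusion test between the tree automata recognizing the two sets, which is where the EXPTIME bound comes from.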

    Proof-Theoretic Methods for Analysis of Functional Programs

    We investigate how, in a natural deduction setting, we can specify concisely a wide variety of tasks that manipulate programs as data objects. This study will provide us with a better understanding of various kinds of manipulations of programs and also an operational understanding of numerous features and properties of a rich functional programming language. We present a technique, inspired by structural operational semantics and natural semantics, for specifying properties of, or operations on, programs. Specifications of this sort are presented as sets of inference rules and are encoded as clauses in a higher-order, intuitionistic meta-logic. Program properties are then proved by constructing proofs in this meta-logic. We argue the following points regarding these specifications and their proofs: (i) the specifications are clear and concise and they provide intuitive descriptions of the properties being described; (ii) a wide variety of program analysis tools can be specified in a single unified framework, and thus we can investigate and understand the relationship between various tools; (iii) proof theory provides a well-established and formal setting in which to examine meta-theoretic properties of these specifications; and (iv) the meta-logic we use can be implemented naturally in an extended logic programming language and thus we can produce experimental implementations of the specifications. We expect that our efforts will provide new perspectives and insights for many program manipulation tasks.
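
    To make the flavour of such rule-based specifications concrete, here is a toy natural-semantics evaluator rendered in Haskell; this is our own illustration, not the paper's encoding in a higher-order intuitionistic meta-logic.

```haskell
-- A toy call-by-name evaluator: the judgment  t ⇓ v  realized as a
-- Haskell function (our illustration, not the paper's meta-logic).
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- Naive substitution; adequate here because we only evaluate closed terms.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- Rule (Value):  λx.b ⇓ λx.b
-- Rule (App):    f ⇓ λx.b   and   b[a/x] ⇓ v   imply   f a ⇓ v
eval :: Term -> Maybe Term
eval (Lam x b) = Just (Lam x b)
eval (App f a) = do Lam x b <- eval f
                    eval (subst x a b)
eval (Var _)   = Nothing   -- free variables have no value
```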

    A complete proof of the safety of Nöcker's strictness analysis

    This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
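
    A tiny concrete instance (ours, assuming Haskell as the object language) of what the set constants mean in practice: an Inf-style list, and a function that forces the list spine and therefore diverges on it.

```haskell
-- The set constant Inf denotes lists without a final Nil; the closest
-- concrete inhabitant in Haskell is an infinite (cyclic) list.
inf :: [Int]
inf = 0 : inf                  -- never ends in Nil, i.e. an element of Inf

-- length forces the whole spine, so `length inf` does not terminate.
-- A checker in the style of SAL establishes this non-termination
-- abstractly: length maps Inf to Bot, witnessing spine-strictness.
spineStrictDemo :: Int
spineStrictDemo = length inf   -- diverges if ever demanded
```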

    Contextual Equivalences in Call-by-Need and Call-by-Name Polymorphically Typed Calculi (Preliminary Report)

    This paper presents a call-by-need polymorphically typed lambda-calculus with letrec, case, constructors and seq. The typing of the calculus is modelled in a system-F style. Contextual equivalence is used as the semantics of expressions. We also define a call-by-name variant without letrec. We adapt several tools and criteria for recognizing correct program transformations to polymorphic typing, in particular an inductive applicative simulation.
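
    For orientation, the untyped core of an applicative simulation looks as follows; this is our rendering of the standard definition, which the paper adapts to polymorphic typing.

```latex
% Standard untyped applicative similarity (illustrative, not the
% paper's typed definition); \Downarrow is evaluation to an abstraction.
s \lesssim t \;:\Longleftrightarrow\;
  \forall s'.\; s \Downarrow \lambda x.\,s' \implies
  \exists t'.\; t \Downarrow \lambda x.\,t'
  \;\wedge\; \forall r\ \text{closed}.\; s'[r/x] \lesssim t'[r/x]
```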

    A Family of Syntactic Logical Relations for the Semantics of Haskell-like Languages

    Logical relations are a fundamental and powerful tool for reasoning about programs in languages with parametric polymorphism. Logical relations suitable for reasoning about observational behavior in polymorphic calculi supporting various programming language features have been introduced in recent years. Unfortunately, the calculi studied are typically idealized, and the results obtained for them offer only partial insight into the impact of such features on observational behavior in implemented languages. In this paper we show how to bring reasoning via logical relations more directly to bear on real languages by deriving results that are pertinent to an intermediate language, like GHC Core, for the (mostly) lazy functional language Haskell. To provide a more fine-grained analysis of program behavior than is possible by reasoning about program equivalence alone, we work with an abstract notion of relating the observational behavior of computations, which has among its specializations both observational equivalence and observational approximation. We take selective strictness into account, and we consider the impact of different kinds of computational failure, e.g., divergence versus failed pattern matching, because such distinctions are significant in practice. Once distinguished, the relative definedness of different failure causes needs to be considered, because different orders here induce different observational relations on programs (including the choice between equivalence and approximation). Our main contribution is the construction of an entire family of logical relations, parameterized over a definedness order on failure causes, each member of which characterizes the corresponding observational relation. Although we deal with properties very much tied to types, we base our results on a type-erasing semantics, since this is more faithful to actual implementations.
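
    As a reference point, the clause that every such logical relation has at function types is the standard one below (our schematic rendering); the members of the paper's family differ in how failure causes are related at observation types, not in this clause.

```latex
% Standard function-type clause of a logical relation (illustrative):
% related functions map related arguments to related results.
(f, g) \in \mathcal{R}_{\sigma \to \tau} \;:\Longleftrightarrow\;
  \forall (a, b) \in \mathcal{R}_{\sigma}.\; (f\,a,\; g\,b) \in \mathcal{R}_{\tau}
```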

    Simulation in the Call-by-Need Lambda-Calculus with Letrec, Case, Constructors, and Seq

    This paper shows equivalence of several versions of applicative similarity and contextual approximation, and hence also of applicative bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. The use of bisimilarities simplifies equivalence proofs in calculi and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus, equivalence of our similarities and contextual approximation can be shown by Howe's method. Similarity is transferred back to LR on the basis of an inductively defined similarity. The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy, and its correctness is shown by exploiting the infinite trees which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators, and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, which is also an identity on letrec-free expressions.
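
    The idea behind the second translation, removing letrec in favour of fixpoint combinators, can be illustrated in plain Haskell; this is our sketch, whereas the paper uses multi-fixpoint combinators inside the lambda calculus itself.

```haskell
import Data.Function (fix)

-- Mutual recursion without letrec: one fixpoint over a tuple, in the
-- spirit of a multi-fixpoint-combinator encoding (our illustration).
-- The lazy pattern ~(ev, od) is essential; a strict match would diverge.
evenOdd :: (Int -> Bool, Int -> Bool)
evenOdd = fix (\ ~(ev, od) ->
  ( \n -> n == 0 || od (n - 1)     -- the "even" binding
  , \n -> n /= 0 && ev (n - 1) ))  -- the "odd" binding

isEven :: Int -> Bool
isEven = fst evenOdd               -- isEven 4 == True
```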

    On the safety of Nöcker's strictness analysis

    This paper proves the correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective method for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda-calculus with case, constructors, letrec, and a nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: currently, the proof of the correctness of the strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
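
    Schematically, a context lemma of the kind used here has the following shape (our rendering, not the paper's exact statement): quantification over all contexts C is reduced to the much smaller class of reduction contexts R.

```latex
% Shape of a context lemma (illustrative); \downarrow is may-convergence.
\bigl(\forall R.\; R[s]\downarrow \implies R[t]\downarrow\bigr)
  \;\implies\;
\bigl(\forall C.\; C[s]\downarrow \implies C[t]\downarrow\bigr)
```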

    On generic context lemmas for lambda calculi with sharing

    This paper proves several generic variants of context lemmas and thus contributes to improving the tools for developing an observational semantics based on a reduction semantics for a language. The context lemmas are provided for may-convergence as well as two variants of must-convergence, and for a wide class of extended lambda calculi that satisfy certain abstract conditions. The calculi must have a form of node sharing, e.g., plain beta reduction is not permitted. There are two variants: weakly sharing calculi, where beta-reduction is only permitted for arguments that are variables, and strongly sharing calculi, which roughly correspond to call-by-need calculi, where beta-reduction is completely replaced by a sharing variant; both are sketched after this paragraph. The calculi must obey three abstract assumptions, which are in general easily recognizable given the syntax and the reduction rules. The generic context lemmas have as instances several context lemmas already proved in the literature for specific lambda calculi with sharing. The scope of the generic context lemmas comprises not only call-by-need calculi, but also call-by-value calculi with a form of built-in sharing. Investigations into other, new variants of extended lambda-calculi with sharing, where the language or the reduction rules and/or strategy varies, will be simplified by our result, since specific context lemmas are immediately derivable from the generic context lemma, provided our abstract conditions are met.
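
    The two sharing disciplines can be summarized by how they restrict beta; this is our schematic rendering of the restrictions described in the abstract.

```latex
% Weakly sharing: beta fires only when the argument is a variable.
(\lambda x.\,e)\; y \;\to\; e[y/x]
% Strongly sharing (call-by-need style): beta is replaced by a rule
% that shares the argument via a let binding instead of substituting it.
(\lambda x.\,e)\; e' \;\to\; \mathsf{let}\; x = e' \;\mathsf{in}\; e
```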

    Simulation in the call-by-need lambda-calculus with letrec

    This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
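
    A tiny example (ours) of the kind of transformation such bisimilarity results make convenient to justify, one-step unfolding of a cyclic letrec binding, written in Haskell:

```haskell
-- Unfolding a letrec binding once is contextually equivalent to the
-- original cyclic definition; bisimilarity reduces such proofs to
-- comparing applicative/observable behavior (our illustration).
ones :: [Int]
ones = let xs = 1 : xs in xs                 -- cyclic letrec binding

onesUnfolded :: [Int]
onesUnfolded = 1 : (let xs = 1 : xs in xs)   -- one unfolding step

-- Both yield the same observations in every context, e.g.:
check :: Bool
check = take 5 ones == take 5 onesUnfolded   -- True
```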