
    Simulation in the Call-by-Need Lambda-Calculus with Letrec, Case, Constructors, and Seq

    This paper shows the equivalence of several versions of applicative similarity and contextual approximation, and hence also of applicative bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions, and Haskell's seq operator. LR models an untyped version of the core language of Haskell. The use of bisimilarities simplifies equivalence proofs in calculi and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus, equivalence of our similarities and contextual approximation can be shown by Howe's method. Similarity is transferred back to LR on the basis of an inductively defined similarity. The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy; its correctness is shown by exploiting the infinite trees that emerge from unfolding the letrec expressions. The second translation encodes letrec-expressions using multi-fixpoint combinators; its correctness is shown syntactically by comparing reductions in both calculi. A further result of the paper is an isomorphism between the two calculi, which is moreover the identity on letrec-free expressions.
    Comment: 50 pages, 11 figures
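
    To illustrate the spirit of the second translation step, here is a minimal Haskell sketch of encoding a mutually recursive letrec with an explicit fixpoint combinator over a lazily matched pair. This is my own illustration under stated assumptions, not the paper's actual multi-fixpoint translation; the names evenL, oddL, and evenOddDemo are hypothetical.

        import Data.Function (fix)

        -- letrec evenL = \n -> n == 0 || oddL  (n - 1)
        --        oddL  = \n -> n /= 0 && evenL (n - 1)
        -- in evenL 10
        -- rewritten with an explicit fixpoint over a lazily matched pair:
        evenOddDemo :: Bool
        evenOddDemo =
          let (evenL, oddL) =
                fix (\ ~(e, o) ->
                       ( \n -> n == 0 || o (n - 1)
                       , \n -> n /= 0 && e (n - 1) ))
          in evenL 10   -- evaluates to True

        -- Haskell's seq, also part of LR, forces its first argument to
        -- weak head normal form:
        --   seq undefined ()        diverges
        --   seq (1 : undefined) ()  evaluates to ()

    The lazy pattern ~(e, o) is essential here: without it, matching the pair would itself force the fixpoint and the encoding would diverge.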

    Simulation in the call-by-need lambda-calculus with letrec

    This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
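
    For reference, a standard way to define applicative similarity on closed terms of a lazy lambda calculus (the formulation going back to Abramsky's work, not necessarily the exact relation used for the letrec calculus here) is as the greatest relation ≲ satisfying

        \[
          s \lesssim t \;\wedge\; s \Downarrow \lambda x.\,s'
          \;\Longrightarrow\;
          \exists t'.\; t \Downarrow \lambda x.\,t' \;\wedge\;
          \forall \text{closed } r.\; s'[r/x] \lesssim t'[r/x] ,
        \]

    where ⇓ denotes evaluation to weak head normal form. Applicative bisimilarity is the intersection of ≲ with its converse; the paper's result is that these relations coincide with contextual approximation and contextual equivalence, respectively.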

    Cyclic Datatypes modulo Bisimulation based on Second-Order Algebraic Theories

    Cyclic data structures, such as cyclic lists, are tricky to handle in functional programming because of their cyclicity. This paper presents an investigation of the categorical, algebraic, and computational foundations of cyclic datatypes. Our framework of cyclic datatypes is based on the second-order algebraic theories of Fiore et al., which give a uniform setting for the syntax, types, and computation rules used to describe and reason about cyclic datatypes. We extract the "fold" computation rules from a categorical semantics based on the iteration categories of Bloom and Ésik, so the rules are correct by construction. We prove strong normalisation using the General Schema criterion for second-order computation rules. Rather than the fixed-point law, we choose Bekić's law for computation, which is key to obtaining strong normalisation. We also prove that the computation rules are Church-Rosser modulo bisimulation. Combining these results, we obtain decidability of the equational theory of cyclic data and fold.
    Comment: 38 pages
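
    To make the starting difficulty concrete, here is a small Haskell sketch (my own illustration, not the paper's second-order algebraic framework; the type CList and the function foldCList are hypothetical names). A knot-tied cyclic list is structurally infinite, so an ordinary fold over it diverges, whereas a finite representation with an explicit back-pointer admits a terminating fold, and distinct finite representations of the same cycle are identified only up to bisimulation.

        -- A knot-tied cyclic list: structurally infinite, so an ordinary
        -- fold such as sum does not terminate on it.
        ones :: [Int]
        ones = 1 : ones          -- sum ones diverges

        -- A finite, first-order representation with an explicit marker
        -- pointing back to the head of the list (letrec/mu-binder style).
        data CList a = Cons a (CList a) | BackToHead
          deriving Show

        -- A fold that interprets the back-pointer with a seed value,
        -- and therefore terminates on the finite representation.
        foldCList :: (a -> b -> b) -> b -> CList a -> b
        foldCList f seed = go
          where
            go (Cons x xs) = f x (go xs)
            go BackToHead  = seed

        -- The cycle 1 -> 2 -> back to 1, and its one-step unrolling:
        -- both denote the infinite list 1,2,1,2,..., i.e. they are
        -- bisimilar but not syntactically equal.
        twoCycle, twoCycleUnrolled :: CList Int
        twoCycle         = Cons 1 (Cons 2 BackToHead)
        twoCycleUnrolled = Cons 1 (Cons 2 (Cons 1 (Cons 2 BackToHead)))

    Working modulo bisimulation, as the paper does for its fold computation rules, is what allows such syntactically different but behaviourally equal representations to be identified.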

    Towards Automated Reasoning in Herbrand Structures

    Herbrand structures have the computational advantage that every element in them is definable. A salient feature of the logics they induce is that they internally exhibit the induction scheme, providing a congenial, computationally oriented framework for formal inductive reasoning. Nonetheless, their enhanced expressivity renders any effective proof system for them incomplete. Furthermore, the fact that they are not compact poses yet another proof-theoretic challenge. This paper offers several layers for coping with the inherent incompleteness and non-compactness of these logics. First, two types of infinitary proof system are introduced, one of infinite width and one of infinite height, which manipulate infinite sequents and are sound and complete for the intended semantics. Restricting these systems to finite sequents yields a completeness result for finite entailments. Then, in search of effectiveness, two finite approximations of these systems are presented and explored. Interestingly, the approximation of the infinite-width system via an explicit induction scheme turns out to be weaker than the effective cyclic fragment of the infinite-height system.
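
    For concreteness, consider a signature with a constant 0 and a unary function s (my own illustration, not an example from the paper). In the Herbrand structure over this signature every element is a closed term generated from 0 by s, so the induced logic validates the structural induction scheme

        \[
          \big(\, \varphi(0) \;\wedge\; \forall x\,(\varphi(x) \rightarrow \varphi(s(x))) \,\big)
          \;\rightarrow\; \forall x\, \varphi(x)
        \]

    for every formula φ. Validating induction in this internal sense is what makes the semantics attractive for inductive reasoning, and, as noted above, it is also what stands in the way of an effective, complete proof system.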

    Integrating a Global Induction Mechanism into a Sequent Calculus

    Most interesting proofs in mathematics contain an inductive argument, which requires an extension of the LK-calculus to formalize. The most commonly used calculi for induction add a separate rule or axiom, which weakens the proof-theoretic properties of the calculus. To the best of our knowledge, no such calculus allows cut-elimination to a normal form with the subformula property, i.e. one in which every formula occurring in the proof is a subformula of the end-sequent. Proof schemata are a variant of LK-proofs that can simulate induction by linking proofs together. There exists a schematic normal form with proof-theoretic behaviour comparable to normal forms with the subformula property. However, no calculus for the construction of proof schemata has existed so far. In this paper, we introduce such a calculus and prove soundness and completeness with respect to a fragment of the inductive arguments formalizable in Peano arithmetic.
    Comment: 16 pages
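
    For orientation, one common way to add induction to LK is an explicit rule of the following shape (a standard textbook formulation, not the schematic calculus introduced in the paper):

        \[
          \frac{\Gamma \vdash \Delta, \varphi(0)
                \qquad
                \varphi(x), \Gamma \vdash \Delta, \varphi(s(x))}
               {\Gamma \vdash \Delta, \varphi(t)}
          \;(\mathrm{ind}), \quad x \text{ not free in } \Gamma, \Delta .
        \]

    The induction formula φ need not be a subformula of the end-sequent, which is why calculi with such a rule do not admit cut-elimination to a normal form with the subformula property; proof schemata avoid the explicit rule by linking proofs together instead.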

    lim+, delta+, and Non-Permutability of beta-Steps

    Using a human-oriented formal example proof of the (lim+) theorem, i.e. that the sum of limits is the limit of the sum, which is of value for reference in its own right, we exhibit a non-permutability of beta-steps and delta+-steps (in Smullyan's classification) that is not visible with non-liberalized delta-rules and not serious with further liberalized delta-rules such as the delta++-rule. Besides a careful presentation of the search for a proof of (lim+), with several pedagogical intentions, the main subject is to explain why the order of beta-steps plays such a practically important role in some calculi.
    Comment: ii + 36 pages
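
    For reference, (lim+) is the familiar epsilon-delta statement (my rendering of the theorem named in the abstract):

        \[
          \lim_{x \to a} f(x) = L_1 \;\wedge\; \lim_{x \to a} g(x) = L_2
          \;\Longrightarrow\;
          \lim_{x \to a} \big(f(x) + g(x)\big) = L_1 + L_2 .
        \]

    Its usual proof fixes an arbitrary epsilon > 0, obtains a delta for epsilon/2 from each hypothesis, and takes the minimum of the two. In Smullyan's classification, beta-steps are the branching propositional rules and delta-steps are the rules that introduce eigenvariables; the formal search for this proof is what the paper uses to exhibit how their ordering matters.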