
    A standardisation proof for algebraic pattern calculi

    This work presents insights and results on standardisation for call-by-name pattern calculi. More precisely, we define standard reductions for a pattern calculus with constructor-based data terms and patterns. This notion is based on the reduction steps that are needed to match an argument against a given pattern. We prove the Standardisation Theorem using the technique developed by Takahashi and Crary for the lambda-calculus. The proof is based on the fact that any development can be specified as a sequence of head steps followed by internal reductions, i.e. reductions in which no head steps are involved.
    Comment: In Proceedings HOR 2010, arXiv:1102.346
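
    As a plain lambda-calculus illustration of this decomposition (a sketch of the general idea only; the pattern-calculus version interleaves matching steps), a development can be reordered into head steps ($\to_h$) followed by internal steps ($\to_i$):

        $(\lambda x.\,x\,x)\,((\lambda y.\,y)\,a) \;\to_h\; ((\lambda y.\,y)\,a)\,((\lambda y.\,y)\,a) \;\to_h\; a\,((\lambda y.\,y)\,a) \;\to_i\; a\,a$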

    Developments in the rewriting calculus

    The theory of developments, originally developed for the lambda-calculus, has been successfully adapted to several other computational paradigms, such as first- and higher-order term rewrite systems. The main desirable results on developments are that the complete development of a finite set of redexes always terminates (FD) and that, for a given initial term, all complete developments of a fixed set of redexes end with the same term (FD!). Following the ideas developed for the lambda-calculus, in this paper we present a notion of development, together with proofs of theorems FD and FD!, for the rewriting calculus, a framework embedding the lambda-calculus and rewriting capabilities by allowing abstraction not only over variables but also over patterns. As an additional contribution, a new proof of the confluence property for the rewriting calculus is obtained as a consequence of the results on developments.
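
    A minimal lambda-calculus instance of FD! (an illustration under the standard notions; the rewriting-calculus case additionally handles pattern abstractions): mark both redexes in $(\lambda x.\,x\,x)\,((\lambda y.\,y)\,a)$. The two complete developments are

        $(\lambda x.\,x\,x)\,((\lambda y.\,y)\,a) \;\to\; (\lambda x.\,x\,x)\,a \;\to\; a\,a$
        $(\lambda x.\,x\,x)\,((\lambda y.\,y)\,a) \;\to\; ((\lambda y.\,y)\,a)\,((\lambda y.\,y)\,a) \;\twoheadrightarrow\; a\,a$

    and both terminate (FD) at the same term $a\,a$ (FD!).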

    Normalisation for Dynamic Pattern Calculi

    The Pure Pattern Calculus (PPC) extends the lambda-calculus, as well as the family of algebraic pattern calculi, with first-class patterns; that is, patterns can be passed as arguments, evaluated and returned as results. The notion of matching failure in PPC not only provides a mechanism to define functions by pattern matching on cases but also gives PPC a parallel-or-like, non-sequential behaviour. Devising normalising strategies for PPC, needed for well-behaved implementations, therefore turns out to be challenging. This paper focuses on normalising reduction strategies for PPC. We define a (multistep) strategy and show that it is normalising. The strategy generalises the leftmost-outermost strategy for the lambda-calculus and is strictly finer than parallel-outermost. The normalisation proof is based on the notion of a necessary set of redexes, a generalisation of the notion of needed redex that encompasses non-sequential reduction systems.
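
    For comparison, here is a minimal sketch in Haskell of the leftmost-outermost strategy for the plain lambda-calculus, the strategy the paper's multistep strategy generalises; the representation and names are illustrative, and PPC's pattern matching is not modelled:

        -- One-step leftmost-outermost (normal-order) reduction for the
        -- plain lambda-calculus; a sketch, not the PPC strategy itself.
        data Term = Var String | Lam String Term | App Term Term
          deriving Show

        -- Naive substitution; safe only when bound names are globally
        -- distinct (a common simplification in sketches like this one).
        subst :: String -> Term -> Term -> Term
        subst x s (Var y)   = if x == y then s else Var y
        subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
        subst x s (App f a) = App (subst x s f) (subst x s a)

        -- Contract the leftmost-outermost redex, if any.
        step :: Term -> Maybe Term
        step (App (Lam x b) a) = Just (subst x a b)   -- outermost redex first
        step (App f a) = case step f of               -- then leftmost subterm
          Just f' -> Just (App f' a)
          Nothing -> App f <$> step a
        step (Lam x b) = Lam x <$> step b             -- finally under binders
        step (Var _)   = Nothing

        -- Iterating the step is normalising for the lambda-calculus:
        -- it reaches the normal form whenever one exists.
        normalise :: Term -> Term
        normalise t = maybe t normalise (step t)

    Because the outermost redex is contracted before its argument, normalise terminates on terms such as $(\lambda x.\,y)\,\Omega$ even though the argument diverges; it is PPC's non-sequential matching behaviour that keeps such a simple single-step strategy from sufficing there, hence the paper's multistep, necessary-set-based strategy.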

    Lambda Dependency-Based Compositional Semantics

    This short note presents a new formal language, lambda dependency-based compositional semantics (lambda DCS), for representing logical forms in semantic parsing. By eliminating variables and making existential quantification implicit, lambda DCS logical forms are generally more compact than those in lambda calculus.
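
    An illustrative contrast (the notation loosely follows the lambda DCS literature and is an assumption here, not taken from this note): the set of people born in Seattle.

        lambda DCS:        $\mathrm{PlaceOfBirth}.\mathrm{Seattle}$
        lambda calculus:   $\lambda x.\,\mathrm{PlaceOfBirth}(x, \mathrm{Seattle})$

    The variable $x$ and its binder simply disappear in the lambda DCS form, which is the compactness the note refers to.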

    On Constructor Rewrite Systems and the Lambda Calculus

    We prove that orthogonal constructor term rewrite systems and the lambda-calculus with weak call-by-value reduction (i.e., no reduction is allowed under the scope of a lambda-abstraction) can simulate each other with a linear overhead. In particular, weak call-by-value beta-reduction can be simulated by an orthogonal constructor term rewrite system in the same number of reduction steps. Conversely, each reduction in a term rewrite system can be simulated by a constant number of beta-reduction steps. This is relevant to implicit computational complexity, because the number of beta steps to normal form is polynomially related to the actual cost (that is, as performed on a Turing machine) of normalisation under weak call-by-value reduction. Orthogonal constructor term rewrite systems and the lambda-calculus are thus both polynomially related to Turing machines, taking their natural parameters as the notion of cost.
    Comment: 27 pages. arXiv admin note: substantial text overlap with arXiv:0904.412
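
    As a small illustration of the rewriting side (a sketch of our own in Haskell, not the paper's encoding): an orthogonal constructor system for addition on unary numerals, with the rewrite-step count made explicit. By the paper's result, a weak call-by-value lambda encoding would spend only a constant number of beta steps per counted rewrite step.

        -- The TRS:  add(Z, y) -> y ;  add(S x, y) -> S (add x y).
        -- A step counter is threaded through to make the rewrite cost explicit.
        data Nat = Z | S Nat deriving Show

        add :: Nat -> Nat -> (Nat, Int)
        add Z     y = (y, 1)                  -- one rewrite step
        add (S x) y = let (r, n) = add x y    -- rewrite the recursive call
                      in (S r, n + 1)         -- plus this step

        -- add (S (S Z)) (S Z) yields (S (S (S Z)), 3): three rewrite steps,
        -- hence O(1) * 3 beta steps in the simulating lambda term.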

    Repetitive Reduction Patterns in Lambda Calculus with letrec (Work in Progress)

    For the lambda-calculus with letrec we develop an optimisation based on the contraction of a certain class of 'future' (also: virtual) redexes. In the implementation of functional programming languages it is common practice to perform beta-reductions at compile time whenever possible, in order to produce code that requires fewer reductions at run time. This is, however, in principle limited to redexes and created redexes that are 'visible' (in the sense that they can be contracted without the need for unsharing), and cannot generally be extended to redexes concealed by sharing constructs such as letrec. In the case of recursion, concealed redexes become visible only after unwindings during evaluation, and then have to be contracted time and again. We observe that in some cases such redexes exhibit a certain form of repetitive behaviour at run time. We describe an analysis for identifying binders that give rise to such repetitive reduction patterns, and eliminate them by a sort of predictive contraction, lifting these binders out of recursive positions or eliminating them altogether, and thereby reducing the number of beta-reductions required in each recursive iteration. Both our analysis and our simplification are suitable for integration into existing compilers for functional programming languages as an additional optimisation phase. With this work we hope to contribute to increasing the efficiency of executing programs written in such languages.
    Comment: In Proceedings TERMGRAPH 2011, arXiv:1102.226
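
    A hypothetical illustration in Haskell (invented for exposition, not taken from the paper): the application binding k below is re-created at every unwinding of go, so contracting it once, ahead of time, saves a beta-reduction per iteration.

        -- Before: the redex (\k -> ...) 1 is concealed by the recursive
        -- binding and is contracted anew at each call of go.
        loop :: Int -> Int
        loop = go
          where
            go n = (\k -> if n <= 0 then k else go (n - k)) 1

        -- After predictive contraction, the binder k is gone:
        --   go n = if n <= 0 then 1 else go (n - 1)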

    Lambda Calculus in Core Aldwych

    Core Aldwych is a simple model of concurrent computation based on agents that communicate through shared variables. Each variable has exactly one agent that can write to it, and its value can never be changed once written, but a value can contain further variables that are written to later. A key aspect is that the reader of a value may become the writer of variables inside it. In this paper we show how this model can be used to encode the lambda calculus. Individual function applications can be explicitly encoded as lazy or not, as required. We then show how this encoding can be extended to cover functions that manipulate mutable variables, while the underlying Core Aldwych implementation still uses only immutable variables. The ordering of function applications then becomes an issue, with Core Aldwych able to model either the enforcement of an ordering or the retention of indeterminate ordering, which allows parallel execution.
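
    A loose Haskell analogy (Core Aldwych's own syntax does not appear in this abstract, so this sketch merely mimics the single-writer, write-once discipline using MVars that are filled at most once by convention): a function becomes an agent that reads its argument variable and writes its result variable.

        import Control.Concurrent (forkIO)
        import Control.Concurrent.MVar

        -- A write-once variable, owned by the single agent allowed to fill it.
        type Var a = MVar a

        -- A function encoded as an agent wiring an argument variable
        -- to a result variable.
        applyAgent :: (a -> b) -> Var a -> Var b -> IO ()
        applyAgent f arg res = do
          x <- readMVar arg        -- block until the writer supplies a value
          putMVar res (f x)        -- write the result exactly once

        main :: IO ()
        main = do
          arg <- newEmptyMVar
          res <- newEmptyMVar
          _ <- forkIO (applyAgent (+ 1) arg res)  -- the function agent
          putMVar arg (41 :: Int)                 -- the argument's writer
          print =<< readMVar res                  -- prints 42

    Spawning the function agent before or after the argument is written corresponds, roughly, to the lazy-versus-eager choice the abstract mentions.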

    Rewriting Modulo β in the λΠ-Calculus Modulo

    The lambda-Pi-calculus Modulo is a variant of the lambda-calculus with dependent types in which beta-conversion is extended with user-defined rewrite rules. It is an expressive logical framework and has been used to encode logics and type systems in a shallow way. Basic properties such as subject reduction or uniqueness of types do not hold in general in the lambda-Pi-calculus Modulo. However, they hold if the rewrite system generated by the rewrite rules together with beta-reduction is confluent; but this is too restrictive. To handle the case where non-confluence comes from the interference between beta-reduction and rewrite rules with lambda-abstractions on their left-hand sides, we introduce a notion of rewriting modulo beta for the lambda-Pi-calculus Modulo. We prove that confluence of rewriting modulo beta is enough to ensure subject reduction and uniqueness of types. We achieve our goal by encoding the lambda-Pi-calculus Modulo into Higher-Order Rewrite Systems (HRSs). As a consequence, we also make the confluence results for HRSs available for the lambda-Pi-calculus Modulo.
    Comment: In Proceedings LFMTP 2015, arXiv:1507.0759
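
    A standard illustration of the kind of rule concerned (our example, not one from the paper): a differentiation rule whose left-hand side contains a lambda-abstraction,

        $\mathsf{diff}\,(\lambda x.\,\sin x) \;\longrightarrow\; \lambda x.\,\cos x$

    A term such as $\mathsf{diff}\,(\lambda x.\,(\lambda y.\,\sin y)\,x)$ is beta-equivalent to an instance of this left-hand side but does not match it syntactically, so plain rewriting and beta-reduction interfere; matching modulo beta recovers the rewrite step.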