Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies
We study normalising reduction strategies for infinitary Combinatory
Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and
needed-fair strategies are normalising for orthogonal, fully-extended iCRSs.
These facts properly generalise a number of results on normalising strategies
in first-order infinitary rewriting and provide the first examples of
normalising strategies for infinitary lambda calculus.
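To give a concrete picture of the finitary case these results generalise, here is a minimal sketch of the leftmost-outermost strategy for the ordinary untyped lambda calculus, a classic normalising strategy. The term encoding and function names are our own illustration, and substitution is kept naive (it assumes bound names are distinct, so no capture handling is needed).

```python
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)

def subst(t, x, s):
    # Naive substitution; assumes bound names are distinct (no capture).
    if t[0] == "var":
        return s if t[1] == x else t
    if t[0] == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
    return ("app", subst(t[1], x, s), subst(t[2], x, s))

def step(t):
    """Perform one leftmost-outermost beta step; return None on a normal form."""
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":                      # outermost redex fires first
            return subst(f[2], f[1], a)
        r = step(f)                            # otherwise leftmost subterm
        if r is not None:
            return ("app", r, a)
        r = step(a)
        if r is not None:
            return ("app", f, r)
        return None
    if t[0] == "lam":
        r = step(t[2])
        return ("lam", t[1], r) if r is not None else None
    return None

def normalise(t, fuel=1000):
    """Iterate leftmost-outermost steps until a normal form (or fuel runs out)."""
    while fuel > 0:
        r = step(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    raise RuntimeError("no normal form found within fuel")
```

On a term such as (λx.y)Ω, where Ω is the divergent (λx.x x)(λx.x x), this strategy contracts the outermost redex first and reaches the normal form y, whereas an innermost strategy would loop inside the argument; normalisation of such terms is exactly what the strategies in the paper guarantee in the infinitary setting.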
Static analysis of Martin-Löf's intuitionistic type theory
Martin-Löf's intuitionistic type theory has been under investigation in recent years as a potential source for future functional programming languages. This is due to its properties which greatly aid the derivation of provably correct programs. These include the Curry-Howard correspondence (whereby logical formulas may be seen as specifications and proofs of logical formulas as programs) and strong normalisation (i.e. evaluation of every proof/program must terminate). Unfortunately, a corollary of these properties is that the programs may contain computationally irrelevant proof objects: proofs which are not to be printed as part of the result of a program.
We show how a series of static analyses may be used to improve the efficiency of type theory as a lazy functional programming language. In particular we show how variants of abstract interpretation may be used to eliminate unnecessary computations in the object code that results from a type theoretic program.
After an informal treatment of the application of abstract interpretation to type theory (where we discuss the features of type theory which make it particularly amenable to such an approach), we give formal proofs of correctness of our abstract interpretation techniques, with regard to the semantics of type theory.
We subsequently describe how we have implemented abstract interpretation techniques within the Ferdinand functional language compiler. Ferdinand was developed as a lazy functional programming system by Andrew Douglas at the University of Kent at Canterbury.
Finally, we show how other static analysis techniques may be applied to type theory. Some of these techniques use the abstract interpretation mechanisms previously discussed.
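To illustrate the flavour of such an analysis, here is a minimal sketch of a may-neededness analysis over a two-point abstract domain: a variable is either possibly needed or certainly unused, and certainly-unused parameters (such as computationally irrelevant proof objects) can be dropped from the compiled code. The expression encoding and names are our own; this is not the Ferdinand compiler's actual implementation.

```python
# Expressions: ("var", x) | ("const", v) | ("prim", op, [args]) | ("if", c, t, e)

def may_need(expr):
    """Abstractly compute the set of variables the expression may demand."""
    kind = expr[0]
    if kind == "var":
        return {expr[1]}
    if kind == "const":
        return set()
    if kind == "prim":                 # strict primitive: may need all operands
        return set().union(set(), *(may_need(a) for a in expr[2]))
    if kind == "if":                   # either branch may be taken at run time
        return may_need(expr[1]) | may_need(expr[2]) | may_need(expr[3])
    raise ValueError(f"unknown expression kind: {kind}")

def removable_args(params, body):
    """Parameters that are never possibly needed and so need not be computed."""
    needed = may_need(body)
    return [p for p in params if p not in needed]
```

For a function whose body never mentions a proof-object parameter, the analysis reports that parameter as removable, which is precisely the kind of unnecessary computation the thesis targets.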
Normalisation for Dynamic Pattern Calculi
The Pure Pattern Calculus (PPC) extends the lambda-calculus, as well as the family of algebraic pattern calculi, with first-class patterns; that is, patterns can be passed as arguments, evaluated and returned as results. The notion of matching failure of the PPC not only provides a mechanism to define functions by pattern matching on cases but also supplies PPC with parallel-or-like, non-sequential behaviour. Therefore, devising normalising strategies for PPC to obtain well-behaved implementations turns out to be challenging.
This paper focuses on normalising reduction strategies for PPC. We define a (multistep) strategy and show that it is normalising. The strategy generalises the leftmost-outermost strategy for lambda-calculus and is strictly finer than parallel-outermost. The normalisation proof is based on the notion of necessary set of redexes, a generalisation of the notion of needed redex encompassing non-sequential reduction systems.
A Quantitative Understanding of Pattern Matching
This paper shows that the recent approach to quantitative typing systems for programming languages can be extended to pattern matching features. Indeed, we define two resource-aware type systems, named U and E, for a λ-calculus equipped with pairs for both patterns and terms. Our typing systems borrow some basic ideas from [Antonio Bucciarelli et al., 2015], which characterises (head) normalisation in a qualitative way, in the sense that typability and normalisation coincide. But, in contrast to [Antonio Bucciarelli et al., 2015], our systems also provide quantitative information about the dynamics of the calculus. Indeed, system U provides upper bounds for the length of (head) normalisation sequences plus the size of their corresponding normal forms, while system E, which can be seen as a refinement of system U, produces exact bounds for each of them. This is achieved by means of a non-idempotent intersection type system equipped with different technical tools. First of all, we use product types to type pairs instead of the disjoint unions in [Antonio Bucciarelli et al., 2015], which turn out to be an essential quantitative tool because they remove the confusion between "being a pair" and "being duplicable". Secondly, typing sequents in system E are decorated with tuples of integers, which provide quantitative information about normalisation sequences, notably time (cf. length) and space (cf. size). Moreover, the time resource information is remarkably refined, because it discriminates between different kinds of reduction steps performed during evaluation, so that beta, substitution and matching steps are counted separately. Another key tool of system E is that the type system distinguishes between consuming (contributing to time) and persistent (contributing to space) constructors.
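The quantities the type systems bound can be observed operationally. Below is a minimal sketch of an evaluator for lambda terms with pairs that counts beta, substitution and matching steps separately, in the spirit of the refined time information described above; the counting scheme and encoding are our own illustration, not the paper's calculus.

```python
# Terms: ("var",x) | ("lam",x,b) | ("app",f,a) | ("pair",a,b) | ("fst",t) | ("snd",t)

def subst(t, x, s, counts):
    """Substitute s for x in t, counting one subst step per replaced occurrence."""
    k = t[0]
    if k == "var":
        if t[1] == x:
            counts["subst"] += 1
            return s
        return t
    if k == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s, counts))
    if k in ("app", "pair"):
        return (k, subst(t[1], x, s, counts), subst(t[2], x, s, counts))
    if k in ("fst", "snd"):
        return (k, subst(t[1], x, s, counts))
    return t

def whnf(t, counts):
    """Weak head normal form, counting beta, subst and matching steps separately."""
    k = t[0]
    if k == "app":
        f = whnf(t[1], counts)
        if f[0] == "lam":
            counts["beta"] += 1
            return whnf(subst(f[2], f[1], t[2], counts), counts)
        return ("app", f, t[2])
    if k in ("fst", "snd"):
        p = whnf(t[1], counts)
        if p[0] == "pair":
            counts["match"] += 1
            return whnf(p[1] if k == "fst" else p[2], counts)
        return (k, p)
    return t
```

For instance, evaluating fst((λx.⟨x, x⟩) I) takes one beta step, two substitution steps (the bound variable occurs twice) and one matching step; a system in the style of E is designed to predict such counts exactly from the typing derivation.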
An Implementation of the Language Lambda Prolog Organized around Higher-Order Pattern Unification
This thesis concerns the implementation of Lambda Prolog, a higher-order
logic programming language that supports the lambda-tree syntax approach to
representing and manipulating formal syntactic objects. Lambda Prolog achieves
its functionality by extending a Prolog-like language by using typed lambda
terms as data structures that it then manipulates via higher-order unification
and some new program-level abstraction mechanisms. These additional features
raise new implementation questions that must be adequately addressed for Lambda
Prolog to be an effective programming tool. We consider these questions here,
eventually providing a virtual machine and a compilation-based realization. A key
idea is the orientation of the computation model of Lambda Prolog around a
restricted version of higher-order unification that has good algorithmic
properties and appears to encompass most interesting applications. Our virtual machine
embeds a treatment of this form of unification within the structure of the
Warren Abstract Machine that is used in traditional Prolog implementations.
Along the way, we treat various auxiliary issues such as the low-level
representation of lambda terms, the implementation of reduction on such terms
and the optimized processing of types in computation. We also develop an actual
implementation of Lambda Prolog called Teyjus Version 2. A characteristic of
this system is that it realizes an emulator for the virtual machine in the C
language and a compiler in the OCaml language. We present a treatment of the
software issues that arise from this kind of mixing of languages within one
system and we discuss issues relevant to the portability of our virtual machine
emulator across arbitrary architectures. Finally, we assess the efficacy of
our various design ideas through experiments carried out using the system.
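One of the auxiliary issues mentioned, the low-level representation of lambda terms and the implementation of reduction on them, can be sketched in its textbook form with de Bruijn indices; this is an illustration of the problem space only, not Teyjus's actual representation, which relies on more sophisticated machinery such as explicit substitutions.

```python
# Terms in de Bruijn notation: ("var", i) | ("lam", body) | ("app", f, a)

def shift(t, d, cutoff=0):
    """Add d to every free index (>= cutoff) in t."""
    k = t[0]
    if k == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if k == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Replace index j in t by s, adjusting indices under binders."""
    k = t[0]
    if k == "var":
        return s if t[1] == j else t
    if k == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(f_body, arg):
    """Contract the redex (lam. f_body) arg."""
    return shift(subst(f_body, 0, shift(arg, 1)), -1)
```

Even this naive formulation shows why the engineering matters: every beta step walks the whole term to shift and substitute, which is why serious implementations delay these traversals, and why treating reduction efficiently is a genuine implementation question for Lambda Prolog.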