
    Practical implementation of a dependently typed functional programming language

    Types express a program's meaning, and checking types ensures that a program has the intended meaning. In a dependently typed programming language, types are predicated on values, making it possible to express invariants of a program's behaviour in its type. Dependent types allow us to give more detailed meanings to programs, and hence to be more confident of their correctness. This thesis considers the practical implementation of a dependently typed programming language, using the Epigram notation defined by McBride and McKinna. Epigram is a high-level notation for dependently typed functional programming, elaborating to a core type theory based on Luo's UTT and using Dybjer's inductive families and elimination rules to implement pattern matching. This gives us a rich framework for reasoning about programs. However, a naive implementation introduces several run-time overheads, since the type system blurs the distinction between types and values; these overheads include the duplication of values and the storage of redundant information and explicit proofs. A practical implementation of any programming language should be as efficient as possible; in this thesis we see how the apparent efficiency problems of dependently typed programming can be overcome, and that in many cases the richer type information allows us to apply optimisations which are not directly available in traditional languages. I introduce three storage optimisations on inductive families: forcing, detagging and collapsing. I further introduce a compilation scheme from the core type theory to G-machine code, including a pattern-matching compiler for elimination rules and a compilation scheme for an efficient run-time implementation of Peano's natural numbers. We also see some low-level optimisations for the removal of identity functions, unused arguments and impossible case branches. As a result, we see that a dependent type theory is an effective base on which to build a feasible programming language.
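
    Epigram's own notation is not reproduced in the abstract; the following Haskell GADT sketch (names mine) only illustrates why optimisations like forcing and detagging are possible: the vector's length index lives purely at the type level, so it occupies no storage at run time, and the impossible empty case of vhead needs no run-time check.

        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

        -- Type-level naturals index the vector; they cost nothing at
        -- run time, which is the effect the forcing optimisation
        -- recovers for genuinely dependent code.
        data Nat = Z | S Nat

        data Vec (n :: Nat) a where
          VNil  :: Vec 'Z a
          VCons :: a -> Vec n a -> Vec ('S n) a

        -- Total head: the index rules out the empty case, so no
        -- run-time tag check is needed (cf. detagging and the
        -- removal of impossible case branches).
        vhead :: Vec ('S n) a -> a
        vhead (VCons x _) = x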

    Component library retrieval using property models

    The re-use of products such as code, specifications, design decisions and documentation has been proposed as a method for increasing software productivity and reliability. A major problem that has still to be adequately solved is the storage and retrieval of re-usable 'components'. Current methods, such as keyword retrieval and catalogues, rely on the use of names to describe components or categories. This is inadequate for all but a few well-established components and categories; in the majority of cases names do not convey sufficient information on which to base a decision to retrieve. One approach to this problem is to describe components using a formal specification. However, this is impractical for two reasons: first, the limitations of theorem proving would severely restrict the complexity of components that could be retrieved; and second, the retrieval mechanism would need a method of retrieving components with 'similar' specifications. This thesis proposes the use of formal 'property' models to represent the key functionality of components. Retrieval of components can then take place on the basis of a property model produced by the library's users. These models describe only the key properties of a component, thereby making the task of comparing properties feasible. Views are introduced as a method of relating similar, non-identical property models, and the use of these views facilitates the re-use of components with similar properties. The language Miramod has been developed for the purpose of describing components, and a Miramod compiler and property prover, which allow Miramod models to be compared for similarity, have been designed and implemented. These tools have indicated that model-based component library retrieval is feasible at relatively low levels of the programming process, and future work is suggested to extend the method to encompass earlier stages in the development of large systems.
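
    Miramod's syntax is not given in the abstract; the following hypothetical Haskell sketch illustrates only the retrieval idea, with components indexed by sets of named key properties and a query matching any component whose properties cover it. All names here are illustrative, not Miramod's.

        import qualified Data.Set as Set

        type Property = String            -- a named key property (illustrative)

        data Component = Component
          { compName  :: String
          , compProps :: Set.Set Property }

        -- Retrieve every component whose key properties cover the query.
        retrieve :: Set.Set Property -> [Component] -> [Component]
        retrieve query = filter ((query `Set.isSubsetOf`) . compProps)

        -- Example: only the stable sort matches this query.
        demo :: [Component]
        demo = retrieve (Set.fromList ["sorts", "stable"])
                        [ Component "quicksort" (Set.fromList ["sorts"])
                        , Component "mergesort" (Set.fromList ["sorts", "stable"]) ]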

    Abstract-Syntax-Driven Development of Oberon-0 Using YAJCo

    YAJCo is a tool for the development of software languages based on an annotated language model. The model is represented by Java classes with annotations defining their mapping to concrete syntax. This approach to language definition enables the abstract syntax, rather than the concrete syntax, to be the central point of the development process. In this paper a case study of the development of the Oberon-0 programming language is presented. The study is based on the LDTA Tool Challenge and showcases details of abstract and concrete syntax definition using YAJCo, as well as the implementation of name resolution, type checking, model transformation and code generation. The language was implemented in a modular fashion to demonstrate the language extension mechanisms supported by YAJCo.
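
    YAJCo's model consists of annotated Java classes, which are not reproduced here; as a rough, language-neutral sketch of the abstract-syntax-first idea (not YAJCo's API), a tiny Oberon-0-like expression fragment in Haskell might put the data types at the centre and derive concrete syntax from them.

        -- Abstract syntax as the central model (in YAJCo: annotated
        -- Java classes with their mapping to concrete syntax).
        data Expr = Num Int | Var String | Bin Op Expr Expr
        data Op   = Add | Mul

        -- Concrete syntax as a mapping from the abstract model.
        render :: Expr -> String
        render (Num n)     = show n
        render (Var x)     = x
        render (Bin o a b) = "(" ++ render a ++ sym o ++ render b ++ ")"
          where
            sym Add = " + "
            sym Mul = " * "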

    Aspects of functional programming

    This thesis explores the application of functional programming in new areas and its implementation using new technologies. We show how functional languages can be used to implement solutions to problems in fuzzy logic using a number of languages: Haskell, Ginger and Aladin. A compiler for the weakly typed, lazy language Ginger is developed using Java byte-code as its target code. This is used as the inspiration for an implementation of Aladin, a simple functional language with two novel features: its primitives are designed to be written in any language, and evaluation is controlled by declaring the strictness of all functions. Efficient denotational and operational semantics are given for this machine, and an implementation is developed using these semantics. We then show that, by exploiting the advantages of Aladin (simplicity and strictness control), we can employ partial evaluation to achieve considerable speed-ups in the running times of Aladin programs.
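
    Aladin's concrete syntax and primitive interface are not shown in the abstract; the sketch below is a hypothetical Haskell rendering of its second feature, in which every primitive declares the strictness of its arguments and the evaluator forces the strict ones before the call.

        data Value = I Int | Thunk (() -> Value)

        -- A primitive declares which arguments are strict (True = force).
        data Prim = Prim
          { strict :: [Bool]
          , run    :: [Value] -> Value }

        force :: Value -> Value
        force (Thunk f) = force (f ())
        force v         = v

        -- Strict arguments are evaluated before the call; lazy ones are
        -- passed as thunks, so strictness declarations drive evaluation.
        apply :: Prim -> [Value] -> Value
        apply p = run p . zipWith pick (strict p)
          where
            pick True  v = force v
            pick False v = v

        addP :: Prim                    -- addition: strict in both arguments
        addP = Prim [True, True] (\[I a, I b] -> I (a + b))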

    A parallel functional language compiler for message-passing multicomputers

    The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised and is the largest Haskell program to have been successfully parallelised, achieving good absolute speedups on a network of SUN workstations. Since it has the same basic structure as other production compilers for functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in C, envisioned to run on distributed-memory machines. The code generator is based on a novel compilation scheme, specified using a restricted form of Milner's π-calculus, which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program, and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is itself parallelised, improving its efficiency when compiling input programs. Second, the input programs to the compiler can themselves contain annotations, based on which the compiler generates multi-threaded parallel code. These features make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity, where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs, but the quantum-based scheduler is best for programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
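
    Naira's annotation syntax is not reproduced in the abstract; the same advisory-annotation idea can be illustrated with GHC's par and pseq combinators (from the parallel package), where a function application is the unit of parallelism and removing the hints changes performance but not meaning.

        import Control.Parallel (par, pseq)

        -- Advisory parallelism: `par` sparks x for parallel evaluation,
        -- `pseq` ensures y is evaluated before the combination. Like
        -- Naira's annotations, these hints preserve the semantics.
        pfib :: Int -> Int
        pfib n
          | n < 2     = n
          | otherwise = x `par` (y `pseq` (x + y))
          where
            x = pfib (n - 1)
            y = pfib (n - 2)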

    A complete proof of the safety of Nöcker's strictness analysis

    This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective way to perform strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest-fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
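
    SAL itself works by abstract reduction over set constants with cycle detection, which is too large to sketch here; the Haskell fragment below only illustrates the underlying semantic notion on the classic two-point strictness domain: f is strict iff f ⊥ = ⊥, so a strict argument may safely be evaluated eagerly (for example via seq) without changing the program's meaning.

        -- Two-point abstract domain: Bot (definitely undefined) and Top
        -- (possibly defined). This is classic strictness-analysis
        -- flavour, not Nöcker's abstract reduction or the SAL algorithm.
        data Abs = Bot | Top deriving (Eq, Show)

        -- Abstract addition is strict in both arguments.
        absAdd :: Abs -> Abs -> Abs
        absAdd Bot _ = Bot
        absAdd _ Bot = Bot
        absAdd _ _   = Top

        -- For f x = x + 1: abstractly, fAbs Bot = Bot, so f is strict
        -- and its argument may be evaluated eagerly.
        fAbs :: Abs -> Abs
        fAbs x = absAdd x Top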

    On the algebraic denotational specifications of programming language semantics

    Call number: LD2668 .T4 CMSC 1988 S86. Master of Science, Computing and Information Science.