    Implementing and reasoning about hash-consed data structures in Coq

    We report on four different approaches to implementing hash-consing in Coq programs. The use cases include execution inside Coq and execution of the extracted OCaml code. We explore the different trade-offs between faithful use of pristine extracted code and code that is fine-tuned to make use of OCaml programming constructs not available in Coq. We discuss the possible consequences in terms of performance and guarantees. We use the running example of binary decision diagrams and then demonstrate the generality of our solutions by applying them to other examples of hash-consed data structures.
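
    The paper's code is in Coq and OCaml; purely as an illustration of the underlying technique, here is a minimal hash-consing sketch in Haskell (the names and representation are invented for this example, not taken from the paper). Structurally equal BDD nodes are mapped to one shared identifier, so structural equality tests reduce to integer comparisons.

        import qualified Data.Map.Strict as Map
        import Control.Monad.State

        type NodeId = Int

        -- A BDD node: a Boolean leaf, or a decision on a variable whose low/high
        -- branches are identifiers of previously hash-consed nodes.
        data Node = Leaf Bool | Branch Int NodeId NodeId
          deriving (Eq, Ord, Show)

        -- The hash-consing table: structural key -> unique identifier.
        data Table = Table { nodes :: Map.Map Node NodeId, nextId :: NodeId }

        emptyTable :: Table
        emptyTable = Table Map.empty 0

        -- Return the existing identifier for a structurally equal node,
        -- or allocate a fresh one.
        hashCons :: Node -> State Table NodeId
        hashCons n = do
          t <- get
          case Map.lookup n (nodes t) of
            Just i  -> return i
            Nothing -> do
              let i = nextId t
              put (Table (Map.insert n i (nodes t)) (i + 1))
              return i

        main :: IO ()
        main = do
          let (ids, tbl) = runState (mapM hashCons [Leaf True, Leaf False, Leaf True]) emptyTable
          print ids                    -- [0,1,0]: both copies of (Leaf True) share one id
          print (Map.size (nodes tbl)) -- 2: only two distinct nodes are stored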

    A functional specification of effects

    This dissertation is about effects and type theory. Functional programming languages such as Haskell illustrate how to encapsulate side effects using monads. Haskell compilers provide a handful of primitive effectful functions. Programmers can construct larger computations using the monadic return and bind operations. These primitive effectful functions, however, have no associated definition. At best, their semantics are specified separately on paper. This can make it difficult to test, debug, verify, or even predict the behaviour of effectful computations. This dissertation provides pure, functional specifications in Haskell of several different effects. Using these specifications, programmers can test and debug effectful programs. This is particularly useful in tandem with automatic testing tools such as QuickCheck. The specifications in Haskell are not total. This makes them unsuitable for the formal verification of effectful functions. This dissertation overcomes this limitation by presenting total functional specifications in Agda, a programming language with dependent types. There have been alternative approaches to incorporating effects in a dependently typed programming language. Most notably, recent work on Hoare Type Theory proposes to extend type theory with axioms that postulate the existence of primitive effectful functions. This dissertation shows how the functional specifications implement these axioms, unifying the two approaches. The results presented in this dissertation may be used to write and verify effectful programs in the framework of type theory.
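
    As a small illustration of the idea (not the dissertation's own code), the following Haskell sketch gives a pure functional specification of one effect, single-cell mutable state: effectful programs are ordinary values of a data type, and a pure run function assigns them a meaning that can be tested with tools such as QuickCheck.

        -- A pure specification of single-cell mutable state.
        data StateOp s a
          = Return a                    -- finish with a result
          | Get (s -> StateOp s a)      -- read the state, continue with it
          | Put s (StateOp s a)         -- write the state, then continue

        instance Functor (StateOp s) where
          fmap f (Return a) = Return (f a)
          fmap f (Get k)    = Get (fmap f . k)
          fmap f (Put s k)  = Put s (fmap f k)

        instance Applicative (StateOp s) where
          pure = Return
          mf <*> mx = mf >>= \f -> fmap f mx

        instance Monad (StateOp s) where
          Return a >>= f = f a                   -- monadic bind threads the continuation
          Get k    >>= f = Get (\s -> k s >>= f)
          Put s k  >>= f = Put s (k >>= f)

        -- The pure semantics: run a computation from an initial state.
        run :: StateOp s a -> s -> (a, s)
        run (Return a) s = (a, s)
        run (Get k)    s = run (k s) s
        run (Put s' k) _ = run k s'

        -- Example program: increment the state and return the old value.
        increment :: StateOp Int Int
        increment = Get (\n -> Put (n + 1) (Return n))

        main :: IO ()
        main = print (run increment 41)   -- (41,42)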

    Formalizing Stack Safety as a Security Property

    The term stack safety is used to describe a variety of compiler, runtime, and hardware mechanisms for protecting stack memory. Unlike "the heap," the ISA-level stack does not correspond to a single high-level language concept: different compilers use it in different ways to support procedural and functional abstraction mechanisms from a wide range of languages. This protean nature makes it difficult to nail down what it means to correctly enforce stack safety.

    PureCake: A verified compiler for a lazy functional language

    We present PureCake, a mechanically-verified compiler for PureLang, a lazy, purely functional programming language with monadic effects. PureLang syntax is Haskell-like and indentation-sensitive, and its constraint-based Hindley-Milner type system guarantees safe execution. We derive sound equational reasoning principles over its operational semantics, dramatically simplifying some proofs. We prove end-to-end correctness for the compilation of PureLang down to machine code (the first such result for any lazy language) by targeting CakeML and composing with its verified compiler. Multiple optimisation passes are necessary to handle realistic lazy idioms effectively. We develop PureCake entirely within the HOL4 interactive theorem prover.
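
    PureCake's own passes and proofs live in HOL4; purely to illustrate the flavour of a semantics-preserving rewrite that equational reasoning can justify, here is a toy Haskell sketch of constant folding over an invented expression type. Note the rule x * 0 = 0 is deliberately omitted: in a lazy language it would replace a possibly non-terminating subexpression with a terminating one, which is exactly the kind of subtlety a verified compiler must respect.

        -- A toy expression language and one semantics-preserving pass: constant folding.
        data Expr
          = Lit Int
          | Var String
          | Add Expr Expr
          | Mul Expr Expr
          deriving (Eq, Show)

        -- Bottom-up folding of literal arithmetic, plus the identities
        -- x + 0 = x and x * 1 = x (both sound even for diverging x).
        fold :: Expr -> Expr
        fold (Add a b) = simplAdd (fold a) (fold b)
        fold (Mul a b) = simplMul (fold a) (fold b)
        fold e         = e

        simplAdd :: Expr -> Expr -> Expr
        simplAdd (Lit x) (Lit y) = Lit (x + y)
        simplAdd (Lit 0) e       = e
        simplAdd e       (Lit 0) = e
        simplAdd a       b       = Add a b

        simplMul :: Expr -> Expr -> Expr
        simplMul (Lit x) (Lit y) = Lit (x * y)
        simplMul (Lit 1) e       = e
        simplMul e       (Lit 1) = e
        simplMul a       b       = Mul a b

        main :: IO ()
        main = print (fold (Add (Mul (Lit 2) (Lit 3)) (Add (Var "x") (Lit 0))))
        -- prints: Add (Lit 6) (Var "x")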

    Design and implementation of an array language for computational science on a heterogeneous multicore architecture

    The packing of multiple processor cores onto a single chip has become a mainstream solution to fundamental physical issues relating to the microscopic scales employed in the manufacture of semiconductor components. Multicore architectures provide lower clock speeds per core, while aggregate floating-point capability continues to increase. Heterogeneous multicore chips, such as the Cell Broadband Engine (CBE) and modern graphics chips, also address the related issue of an increasing mismatch between high processor speeds and huge latency to main memory. Such chips tackle this memory wall by the provision of addressable caches, increased bandwidth to main memory, and fast thread context switching. An associated cost is often reduced functionality of the individual accelerator cores, and increased complexity in their programming. This dissertation investigates the application of a programming language supporting the first-class use of arrays, and capable of automatically parallelising array expressions, to the heterogeneous multicore domain of the CBE, as found in the Sony PlayStation 3 (PS3). The language is a pre-existing and well-documented proper subset of Fortran, known as the 'F' programming language. A bespoke compiler, referred to as E, is developed to support this aim and is written in the Haskell programming language. The output of the compiler is in an extended C++ dialect known as Offload C++, which targets the PS3. A significant feature of this language is its use of multiple, statically typed address spaces. By focusing on generic, polymorphic interfaces for both the generated and hand-constructed code, a number of interesting design patterns relating to memory locality are introduced. A suite of medium-sized (100-700 lines), real-world benchmark programs is used to evaluate the performance, correctness, and scalability of the compiler technology. Absolute speedup values, well in excess of one, are observed for all of the programs. The work ultimately demonstrates that an array language can significantly reduce the effort expended to utilise a parallel heterogeneous multicore architecture, while retaining high performance. A substantial related advantage of using standard 'F' is that any Fortran compiler can create debuggable and competitively performing serial programs.
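
    As a loose illustration only (none of this is the thesis's E compiler or its 'F' front end), the following Haskell sketch deep-embeds whole-array expressions: each output element depends only on the corresponding input elements, which is what leaves a compiler free to split the evaluation of a single array expression across accelerator cores.

        -- A deep embedding of whole-array expressions.
        data ArrExpr
          = Input [Double]                                     -- an input array
          | Map   (Double -> Double) ArrExpr                   -- elementwise function
          | Zip   (Double -> Double -> Double) ArrExpr ArrExpr -- elementwise combination

        -- Reference (serial) semantics; a parallelising backend could evaluate
        -- disjoint index ranges of the same expression on different cores.
        eval :: ArrExpr -> [Double]
        eval (Input xs)    = xs
        eval (Map f e)     = map f (eval e)
        eval (Zip f e1 e2) = zipWith f (eval e1) (eval e2)

        -- Example: the whole-array expression  c = a*a + b,  written without loops.
        example :: ArrExpr
        example = Zip (+) (Map (\x -> x * x) (Input [1, 2, 3])) (Input [10, 20, 30])

        main :: IO ()
        main = print (eval example)   -- [11.0,24.0,39.0]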

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented, together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Profiling large-scale lazy functional programs

    The LOLITA natural language processing system is an example of one of the ever-increasing number of large-scale systems written entirely in a functional programming language. The system consists of over 50,000 lines of Haskell code and is able to perform a number of tasks such as semantic and pragmatic analysis of text, context scanning and query analysis. Such a system is more useful if the results are calculated in real time, so the efficiency of the system is paramount. For the past three years we have used the profiling tools supplied with the Haskell compilers GHC and HBC to analyse and reason about our programming solutions and have achieved good results; however, our experience has shown that the profiling life-cycle is often too long to make a detailed analysis of a large system possible, and the profiling results are often misleading. A profiling system is developed which allows three types of functionality not previously found in a profiler for lazy functional programs. Firstly, the profiler is able to produce results based on an accurate method of cost inheritance. We have found that this reduces the possibility of the programmer obtaining misleading profiling results. Secondly, the programmer is able to explore the results after the execution of the program. This is done by selecting and deselecting parts of the program using a post-processor. This greatly reduces the analysis time, as no further compilation, execution or profiling of the program is needed. Finally, the new profiling system allows the user to examine aspects of the run-time call structure of the program. This is useful in the analysis of the run-time behaviour of the program. Previous attempts at extending the results produced by a profiler in such a way have failed due to exceptionally high overheads. Exploration of the overheads produced by the new profiling scheme shows that typical overheads in profiling the LOLITA system are a 10% increase in compilation time, a 7% increase in executable size, and a 70% run-time overhead. These overheads mean a considerable saving in time when performing a detailed profiling analysis of a large, lazy functional program.
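
    To make the cost-inheritance idea concrete, here is a minimal Haskell sketch (purely illustrative, not the paper's profiler): a call's inherited cost is its own cost plus the inherited costs of everything it called, and a post-processor can recompute and re-attribute these figures from the recorded call tree without re-running the program.

        -- A recorded call tree with a cost attributed directly to each call.
        data CallTree = Call
          { costCentre :: String      -- name of the profiled function
          , ownCost    :: Double      -- cost attributed directly to this call
          , children   :: [CallTree]  -- calls made from here
          }

        -- Inherited cost: own cost plus all descendants' costs.
        inherited :: CallTree -> Double
        inherited (Call _ own kids) = own + sum (map inherited kids)

        -- Post-processing: report the inherited cost of every call in the tree,
        -- with no further compilation, execution, or profiling needed.
        report :: CallTree -> [(String, Double)]
        report t@(Call name _ kids) = (name, inherited t) : concatMap report kids

        main :: IO ()
        main = mapM_ print (report example)
          where
            example =
              Call "main" 1
                [ Call "parse"   5 [Call "lexer" 3 []]
                , Call "analyse" 10 []
                ]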