
    Calculating with lenses: optimising bidirectional transformations

    This paper presents an equational calculus for reasoning about bidirectional transformations specified in the point-free style. In particular, it focuses on so-called lenses as a bidirectional idiom, and shows that many standard laws characterising point-free combinators and recursion patterns are also valid in that setting. A key result is that uniqueness also holds for bidirectional folds and unfolds, thus unleashing the power of fusion as a program optimisation technique. A rewriting system for automatic lens optimisation is also presented to demonstrate the usefulness of the proposed calculus.
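    As a rough sketch of the lens idiom the abstract refers to (not the paper's own point-free formulation), a lens between a source type S and a view type V pairs a forward "get" with a backward "put", subject to the usual well-behavedness laws. The C++ below uses hypothetical names; lens composition stands in for the kind of combinator such a calculus reasons about and a rewriting system would optimise.

        #include <functional>
        #include <iostream>
        #include <string>

        // A well-behaved lens between a source S and a view V.
        template <typename S, typename V>
        struct Lens {
            std::function<V(const S&)>           get;  // extract the view from the source
            std::function<S(const S&, const V&)> put;  // write an updated view back into the source
        };

        // Composition: a lens from S to V and a lens from V to W give a lens from S to W.
        template <typename S, typename V, typename W>
        Lens<S, W> compose(Lens<S, V> outer, Lens<V, W> inner) {
            return {
                [=](const S& s) { return inner.get(outer.get(s)); },
                [=](const S& s, const W& w) { return outer.put(s, inner.put(outer.get(s), w)); }
            };
        }

        struct Person { std::string name; int age; };

        int main() {
            // A lens focusing on the 'age' field of a Person.
            Lens<Person, int> ageL{
                [](const Person& p) { return p.age; },
                [](const Person& p, const int& a) { Person q = p; q.age = a; return q; }
            };
            Person p{"Ada", 36};
            std::cout << ageL.get(ageL.put(p, 37)) << "\n";    // 37: get after put returns what was put (PutGet)
            std::cout << ageL.put(p, ageL.get(p)).age << "\n"; // 36: putting back the current view changes nothing (GetPut)
            return 0;
        }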

    An Axiomatic Value Model for Isabelle/UTP

    The Unifying Theories of Programming (UTP) is a mathematical framework to define, examine and link program semantics for a large variety of computational paradigms. Several mechanisations of the UTP in HOL theorem provers have been developed. All of them, however, succumb to a trade-off in how they encode the value model of UTP theories. A deep and unified value model via a universal (data)type incurs restrictions on permissible value types and adds complexity; a shallow value model, directly instantiating HOL types for UTP values, retains simplicity but sacrifices expressiveness, since we lose the ability to compositionally reason about alphabets and theories. We here propose an alternative solution that axiomatises the value model and retains the advantages of both: while it supports a unified value notion, it allows us to directly inject a closed universe of HOL types into it. We carefully craft a definitional mechanism in Isabelle/HOL that guarantees soundness.
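    As a loose analogy for the trade-off described above (sketched in C++ rather than Isabelle/HOL, with purely illustrative types and names), a "deep" value model funnels every value through one universal datatype, while a "shallow" model uses the host language's types directly:

        #include <iostream>
        #include <map>
        #include <string>
        #include <variant>

        // Deep value model: one universal type for all values. States over any
        // alphabet share the same representation, but only the alternatives baked
        // into the variant are permissible value types, and every access needs an
        // injection or projection.
        using DeepValue = std::variant<int, bool, std::string>;
        using DeepState = std::map<std::string, DeepValue>;

        // Shallow value model: host types used directly. Simple and unrestricted,
        // but each alphabet is its own record type, so states with different
        // alphabets cannot be manipulated uniformly.
        struct ShallowState { int x; bool ok; };

        int main() {
            DeepState deep{{"x", 41}, {"ok", true}};
            deep["x"] = std::get<int>(deep["x"]) + 1;   // project, compute, inject
            std::cout << std::get<int>(deep["x"]) << "\n";

            ShallowState shallow{41, true};
            shallow.x += 1;                             // direct, typed access
            std::cout << shallow.x << "\n";
            return 0;
        }

    The axiomatic model proposed in the abstract aims to keep the uniform treatment of the first style while admitting host types as directly as the second.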

    The use of formal methods in parallel operating systems

    The authors report on the use of formal methods for the development of parallel operating systems for two experimental declarative systems over a five-year period. A common specification approach has evolved as part of the development of these two very different systems: one for a parallel graph reduction machine, written in a functional language enhanced with state-based objects; the other written in C++. A brief overview of each system is given before concentrating on the use of formal methods. A description is given of how a technique for formally specifying sequential systems (VDM) and a technique for specifying concurrent systems (temporal logic) have been used together. In both cases, the issue of verification is addressed.
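    As a rough sketch of how those two notations divide the work, a VDM-style operation constrains a single state change through pre- and post-conditions, while a temporal-logic formula states a property over whole executions. The operation, state components and predicates here are hypothetical rather than drawn from either system, and post-state values are written with primes instead of VDM's hook notation:

        \mathit{pre}:\;\; p \in \mathit{ready}
        \mathit{post}:\;\; \mathit{running}' = p \;\wedge\; \mathit{ready}' = \mathit{ready} \setminus \{p\}
        \text{liveness}:\;\; \Box\bigl(p \in \mathit{ready} \;\Rightarrow\; \Diamond(\mathit{running} = p)\bigr)

    The first two lines specify what one scheduling step must achieve; the last is the cross-cutting guarantee that every ready process is eventually run.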

    A Methodology for Protocol Verification Applied to EMV 1


    Automating C++ Execution Exploration to Solve the Out-of-thin-air Problem

    Modern computers are marvels of engineering: customisable reasoning engines that can be programmed to complete complex mathematical tasks at incredible speed. Decades of engineering have taken computers from room-sized machines to near-invisible devices in all aspects of life. With this engineering has come more complex and ornate design, a substantial leap forward being multiprocessing. Modern processors can execute threads of program logic in parallel, coordinating shared resources like memory and device access. Parallel computation leads to significant scaling of compute power, but yields a substantial complexity cost for both processor designers and programmers. Parallel access to shared memory requires coordination over which thread can use a particular fragment of memory at a given time. Simple mechanisms like locks and mutexes, which ensure only one process at a time can access memory, give an easy-to-use programming model, but they eschew the benefits of parallel computation. Instead, processors today have complex mechanisms to permit concurrent shared-memory access. These mechanisms prevent simple programmer reasoning and require complex formal descriptions to define: memory models.

    Early memory model research focused on weak memory behaviours that are observable because of hardware design; over time it has become obvious that not only hardware but also compilers are capable of making new weak behaviours observable. Substantial and rapid success has been achieved formalising the behaviour of these machines: researchers refined new specifications for shared-memory concurrency and used mechanisation to automate validation of their models. As the models were refined and new behaviours of the hardware were discovered, researchers also began working with processor vendors, helping to inform design choices in new processor designs to keep the weak behaviours within some sensible bounds. Unfortunately, when reasoning about shared-memory accesses in highly optimised programming languages like C and C++, deep questions are still left open about how best to describe the behaviour of shared-memory accesses in the presence of dependency-removing compiler optimisations. Until very recently it has not been possible to properly specify the behaviours of these programs without either forbidding optimisations that are used and observable, or allowing program behaviours that are nonsense and never observable.

    In this thesis I explore the development of memory models through the lens of tooling: taking at first an industrial approach, and then exploring memory models for highly optimised programming languages. I show that taming the complexity of these models with automated tools aids bug finding even where formal evaluation has not. Further, building tools creates a focus on the computational complexity of the memory model, which in turn can steer development of the model towards simpler designs. We will look at three case studies. The first is an industrial hardware model of NVIDIA GPUs, which we extend to encompass more hardware features than before. This extension was validated using an automated testing process generating tests of finite size, and then verified against the original memory model in Coq. The second case study is an exploration of the first memory model for an optimised programming language that takes proper account of dependencies. We built a tool to automate execution of this model over a series of tests and, in the process, discovered unexpected subtleties in the definitions, leading to refinement of the model. In the final case study, we develop a memory model that gives a direct definition for compiler-preserved dependencies. This is the first model that can be integrated with relative ease into the C/C++ programming language standard. We built this model alongside its own tooling, yielding a fast tool for giving determinations on a large number of litmus tests, a novelty for this sort of memory model. This model fits well with the existing C/C++ specifications, and we are working with the International Standards Organisation to understand how best to fit it into the standard.
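    To make the out-of-thin-air problem concrete, here is the standard "load buffering" litmus test written with C++11 relaxed atomics; it is a textbook example rather than code taken from the thesis:

        #include <atomic>
        #include <iostream>
        #include <thread>

        // Each thread reads one shared variable and writes what it read into the other.
        std::atomic<int> x{0}, y{0};
        int r1 = 0, r2 = 0;

        void thread1() {
            r1 = x.load(std::memory_order_relaxed);   // read x
            y.store(r1, std::memory_order_relaxed);   // write the value read into y
        }

        void thread2() {
            r2 = y.load(std::memory_order_relaxed);   // read y
            x.store(r2, std::memory_order_relaxed);   // write the value read into x
        }

        int main() {
            std::thread t1(thread1), t2(thread2);
            t1.join(); t2.join();
            // On real hardware and compilers the only outcome is r1 == r2 == 0, since each
            // store depends on the preceding load. The C++11 axiomatic model does not track
            // such dependencies, so it cannot rule out the "out-of-thin-air" execution with
            // r1 == r2 == 42: a value nobody ever wrote justifying itself through a cycle.
            std::cout << "r1=" << r1 << " r2=" << r2 << "\n";
            return 0;
        }

    No implementation produces the 42/42 outcome, yet forbidding it naively would also forbid dependency-removing optimisations that compilers really perform; this is the tension between disallowed-but-unspecifiable and allowed-but-nonsense behaviours that the abstract describes.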