7 research outputs found

    Some Challenges of Specifying Concurrent Program Components

    The purpose of this paper is to address some of the challenges of formally specifying components of shared-memory concurrent programs. The focus is to provide an abstract specification of a component that is suitable for use both by clients of the component and as a starting point for refinement to an implementation of the component. We present some approaches to devising specifications, investigating different forms suitable for different contexts. We examine handling atomicity of access to data structures, blocking operations and progress properties, and transactional operations that may fail and need to be retried. Comment: In Proceedings Refine 2018, arXiv:1810.0873
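    As a rough illustration of the last point (a transactional operation that may fail and need to be retried), the following minimal Java sketch, assuming a simple shared counter that is not taken from the paper, attempts an update optimistically and retries whenever a concurrent update causes the compare-and-swap to fail.

    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal sketch: a lock-free increment that may fail under contention
    // and is retried until the compare-and-swap succeeds. The counter is
    // illustrative only; it is not a data structure from the paper.
    final class RetryingCounter {
        private final AtomicInteger value = new AtomicInteger(0);

        int increment() {
            while (true) {
                int current = value.get();
                int next = current + 1;
                // The CAS fails if another thread updated value concurrently;
                // in that case the whole operation is retried.
                if (value.compareAndSet(current, next)) {
                    return next;
                }
            }
        }
    }

    An abstract specification of such a component would typically present the increment as a single atomic step, leaving the retry loop to the implementation.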

    A wide-spectrum language for verification of programs on weak memory models

    Modern processors deploy a variety of weak memory models, which for efficiency reasons may (appear to) execute instructions in an order different to that specified by the program text. The consequences of instruction reordering can be complex and subtle, and can make it difficult to ensure correctness. Previous work on the semantics of weak memory models has focussed on the behaviour of assembler-level programs. In this paper we utilise that work to extract some general principles underlying instruction reordering, and apply those principles to a wide-spectrum language encompassing abstract data types as well as low-level assembler code. The goal is to support reasoning about implementations of data structures for modern processors with respect to an abstract specification. Specifically, we define an operational semantics, from which we derive some properties of program refinement, and encode the semantics in the rewriting engine Maude as a model-checking tool. The tool is used to validate the semantics against the behaviour of a set of litmus tests (small assembler programs) run on hardware, and also to model check implementations of data structures from the literature against their abstract specifications.
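    For readers unfamiliar with litmus tests, the sketch below shows the classic store-buffering pattern in Java rather than in the assembler used in the paper; the class and field names are illustrative. With plain (non-volatile) fields, a weak memory model permits the outcome r1 = r2 = 0, which is impossible if each thread's instructions take effect strictly in program order.

    // Store-buffering (SB) litmus test, sketched with plain Java fields.
    // Under a weak memory model both r1 and r2 may be 0 in the same run.
    class StoreBuffering {
        static int x = 0, y = 0;
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println("r1=" + r1 + " r2=" + r2);
        }
    }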

    A synchronous program algebra: a basis for reasoning about shared-memory and event-based concurrency

    This research started with an algebra for reasoning about rely/guarantee concurrency for a shared memory model. The approach taken led to a more abstract algebra of atomic steps, in which atomic steps synchronise (rather than interleave) when composed in parallel. The algebra of rely/guarantee concurrency then becomes an instantiation of the more abstract algebra. Many of the core properties needed for rely/guarantee reasoning can be shown to hold in the abstract algebra where their proofs are simpler and hence allow a higher degree of automation. The algebra has been encoded in Isabelle/HOL to provide a basis for tool support for program verification. In rely/guarantee concurrency, programs are specified to guarantee certain behaviours until assumptions about the behaviour of their environment are violated. When assumptions are violated, program behaviour is unconstrained (aborting), and guarantees need no longer hold. To support these guarantees, a second synchronous operator, weak conjunction, was introduced: both processes in a weak conjunction must agree to take each atomic step, unless one aborts in which case the whole aborts. In developing the laws for parallel and weak conjunction we found many properties were shared by the operators and that the proofs of many laws were essentially the same. This insight led to the idea of generalising synchronisation to an abstract operator with only the axioms that are shared by the parallel and weak conjunction operators, so that those two operators can be viewed as instantiations of the abstract synchronisation operator. The main differences between parallel and weak conjunction are how they combine individual atomic steps; that is left open in the axioms for the abstract operator. Comment: Extended version of a Formal Methods 2016 paper, "An algebra of synchronous atomic steps".
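    For context, the classical Jones-style parallel rule of rely/guarantee reasoning (a standard textbook formulation in simplified form, not the algebraic presentation developed in this paper) can be stated as follows: each thread must tolerate the other's guarantee as part of its rely, and the combined guarantee and postcondition are the disjunction and conjunction of the individual ones.

    % Classical rely/guarantee parallel rule (Jones-style quintuples),
    % shown for context only; symbols: p precondition, r rely, g guarantee,
    % q postcondition, c1 and c2 the parallel components.
    \[
    \frac{\{p,\; r \lor g_2\}\; c_1\; \{g_1,\; q_1\}
          \qquad
          \{p,\; r \lor g_1\}\; c_2\; \{g_2,\; q_2\}}
         {\{p,\; r\}\; c_1 \parallel c_2\; \{g_1 \lor g_2,\; q_1 \land q_2\}}
    \]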

    Mechanising an algebraic rely-guarantee refinement calculus

    PhD Thesis. Despite rely-guarantee (RG) being a well-studied program logic established in the 1980s, it was not until recently that researchers realised that rely and guarantee conditions could be treated as independent programming constructs. This recent reformulation of RG paved the way to algebraic characterisations which have helped to better understand the difficulties that arise in the practical application of this development approach. The primary focus of this thesis is to provide automated tool support for a rely-guarantee refinement calculus proposed by Hayes et al., where rely and guarantee are defined as independent commands. Our motivation is to investigate the application of an algebraic approach to derive concrete examples using this calculus. In the course of this thesis, we locate and fix a few issues involving the refinement language, its operational semantics and preexisting proofs. Moreover, we extend the refinement calculus of Hayes et al. to cover indexed parallel composition, non-atomic evaluation of expressions within specifications, and assignment to indexed arrays. These extensions are illustrated via concrete examples. Special attention is given to design decisions that simplify the application of the mechanised theory. For example, we leave part of the design of the expression language in the hands of the user, at the cost of requiring the user to define the notion of undefinedness for unary and binary operators; and we also formalise a notion of indexed parallelism that is parametric on the type of the indexes, which is done deliberately to simplify the formalisation of algorithms. Additionally, we use stratification to reduce the number of cases in simulation proofs involving the operational semantics. Finally, we also use the algebra to discuss the role of types in program derivation.
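    As a concrete flavour of indexed parallelism (one parallel component per index, each assigning only to its own array element), here is a small Java sketch; it is purely illustrative and unrelated to the Isabelle/HOL mechanisation or the thesis's refinement language.

    import java.util.Arrays;
    import java.util.stream.IntStream;

    // Illustrative indexed parallel assignment: component i writes only a[i],
    // so each component's guarantee (it leaves every other index unchanged)
    // is trivially tolerated by the others. Not taken from the thesis.
    class IndexedParallelInit {
        public static void main(String[] args) {
            int n = 8;
            int[] a = new int[n];
            // One parallel task per index; f(i) = i * i is a stand-in function.
            IntStream.range(0, n).parallel().forEach(i -> a[i] = i * i);
            System.out.println(Arrays.toString(a));
        }
    }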