    Data refinement for true concurrency

    The majority of modern systems exhibit sophisticated concurrent behaviour, where several system components modify and observe the system state with fine-grained atomicity. Many systems (e.g., multi-core processors, real-time controllers) also exhibit truly concurrent behaviour, where multiple events can occur simultaneously. This paper presents data refinement defined in terms of an interval-based framework, which includes high-level operators that capture non-deterministic expression evaluation. By modifying the type of an interval, our theory may be specialised to cover data refinement of both discrete and continuous systems. We present an interval-based encoding of forward simulation, then prove that our forward simulation rule is sound with respect to our data refinement definition. A number of rules for decomposing forward simulation proofs over both sequential and parallel composition are developed.
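
    To make the forward-simulation obligation concrete, here is a minimal Haskell sketch of classical forward simulation between two finite transition systems. It illustrates the proof obligation that the paper lifts to intervals; it is not the paper's interval-based encoding, and all names (forwardSim, Rel) are our own.

        type Rel s = [(s, s)]            -- a step relation as a list of state pairs

        -- Forward simulation via relation r: every concrete initial state is
        -- related to some abstract initial state, and every concrete step from
        -- a related pair is matched by an abstract step that re-establishes r.
        -- (Stuttering steps are omitted for brevity.)
        forwardSim :: (Eq a, Eq c)
                   => [c] -> Rel c       -- concrete initial states and steps
                   -> [a] -> Rel a       -- abstract initial states and steps
                   -> [(c, a)]           -- candidate simulation relation r
                   -> Bool
        forwardSim cInit cStep aInit aStep r =
             all (\c -> any (\a -> (c, a) `elem` r) aInit) cInit
          && and [ any (\(a0, a1) -> a0 == a && (c', a1) `elem` r) aStep
                 | (c, a) <- r, (c0, c') <- cStep, c0 == c ]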

    A synchronous program algebra: a basis for reasoning about shared-memory and event-based concurrency

    This research started with an algebra for reasoning about rely/guarantee concurrency for a shared-memory model. The approach taken led to a more abstract algebra of atomic steps, in which atomic steps synchronise (rather than interleave) when composed in parallel. The algebra of rely/guarantee concurrency then becomes an instantiation of the more abstract algebra. Many of the core properties needed for rely/guarantee reasoning can be shown to hold in the abstract algebra, where their proofs are simpler and hence allow a higher degree of automation. The algebra has been encoded in Isabelle/HOL to provide a basis for tool support for program verification. In rely/guarantee concurrency, programs are specified to guarantee certain behaviours until assumptions about the behaviour of their environment are violated. When assumptions are violated, program behaviour is unconstrained (aborting), and guarantees need no longer hold. To support these guarantees a second synchronous operator, weak conjunction, was introduced: both processes in a weak conjunction must agree to take each atomic step, unless one aborts, in which case the whole aborts. In developing the laws for parallel and weak conjunction we found that many properties were shared by the operators and that the proofs of many laws were essentially the same. This insight led to the idea of generalising synchronisation to an abstract operator with only the axioms that are shared by the parallel and weak conjunction operators, so that those two operators can be viewed as instantiations of the abstract synchronisation operator. The main differences between parallel and weak conjunction are in how they combine individual atomic steps; that is left open in the axioms for the abstract operator.
    Comment: Extended version of a Formal Methods 2016 paper, "An algebra of synchronous atomic steps".
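
    The abstraction described here (one synchronisation recursion with the step combination left open) can be sketched in a few lines of Haskell. This is an illustrative model only, not the paper's Isabelle/HOL encoding, and every name below is an assumption.

        -- Atomic steps are labelled program or environment steps; a label
        -- stands in for the state change the step makes.
        data Step = Pgm String | Env String
          deriving (Eq, Show)

        -- Finite processes: terminated, aborted, or a step then a rest.
        data Proc = Done | Abort | Atomic Step Proc
          deriving Show

        -- Generic synchronisation: combine two processes step by step, given
        -- a partial combinator for individual atomic steps. Abort annihilates.
        syncWith :: (Step -> Step -> Maybe Step) -> Proc -> Proc -> Proc
        syncWith _ Done Done = Done
        syncWith _ Abort _   = Abort
        syncWith _ _   Abort = Abort
        syncWith f (Atomic s p) (Atomic t q) =
          case f s t of
            Just u  -> Atomic u (syncWith f p q)
            Nothing -> Abort                 -- steps that cannot be combined
        syncWith _ _ _ = Abort               -- unequal lengths, cut for brevity

        -- Weak conjunction: both sides must agree to take identical steps.
        weakConj :: Proc -> Proc -> Proc
        weakConj = syncWith (\s t -> if s == t then Just s else Nothing)

        -- Parallel: a program step of one side synchronises with a matching
        -- environment step of the other; joint environment steps remain
        -- environment steps.
        par :: Proc -> Proc -> Proc
        par = syncWith combine
          where
            combine (Pgm a) (Env b) | a == b = Just (Pgm a)
            combine (Env a) (Pgm b) | a == b = Just (Pgm b)
            combine (Env a) (Env b) | a == b = Just (Env a)
            combine _       _                = Nothing

    The laws shared by weakConj and par then hold of syncWith f for any f, which is the sense in which both operators instantiate the abstract one.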

    Fractional permissions and non-deterministic evaluators in interval temporal logic

    We propose Interval Temporal Logic as a basis for reasoning about concurrent programs with fine-grained atomicity, due to the generality it provides over reasoning with standard pre/post-state relations. To simplify the semantics of parallel composition over intervals, we use fractional permissions, which allow one to ensure that conflicting reads and writes to a variable do not occur simultaneously. Using non-deterministic evaluators over intervals, we enable reasoning about the apparent states over an interval, which may differ from the actual states in the interval. The combination of Interval Temporal Logic, non-deterministic evaluators and fractional permissions results in a generic framework for reasoning about concurrent programs with fine-grained atomicity. We use our logic to develop rely/guarantee-style rules for decomposing a proof of a large system into proofs of its subcomponents, where fractional permissions are used to ensure that the behaviours of a program and its environment do not conflict.
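
    As a rough illustration of the fractional-permission discipline (not the paper's interval semantics; all names are ours), the sketch below gives writes the full permission 1 and reads any positive fraction, and splits permissions across parallel components so that a write can never overlap a conflicting access.

        import qualified Data.Map as Map
        import Data.Ratio ((%))

        type Var   = String
        type Perm  = Rational              -- a fraction in the interval (0, 1]
        type Perms = Map.Map Var Perm

        -- A read needs any positive fraction; a write needs full permission.
        canRead, canWrite :: Perms -> Var -> Bool
        canRead  ps v = maybe False (> 0) (Map.lookup v ps)
        canWrite ps v = Map.lookup v ps == Just 1

        -- Split permissions between two parallel components: each side gets
        -- half of every permission, so neither can write a shared variable.
        split :: Perms -> (Perms, Perms)
        split ps = (half, half) where half = Map.map (/ 2) ps

        main :: IO ()
        main = do
          let ps     = Map.fromList [("x", 1 % 1)]
              (l, r) = split ps
          print (canWrite ps "x")   -- True: full permission before the split
          print (canWrite l  "x")   -- False: only reads remain after splitting
          print (canRead  r  "x")   -- True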

    Layering Assume-Guarantee Contracts for Hierarchical System Design

    Specifications for complex engineering systems are typically decomposed into specifications for individual subsystems in a manner that ensures they are implementable and simpler to develop further. We describe a method to algorithmically construct component specifications that implement a given specification when assembled. By eliminating variables that are irrelevant to the realizability of each component, we simplify the specifications and reduce the amount of information necessary for operation. We parametrize the information flow between components by introducing parameters that select whether each variable is visible to a component. The decomposition algorithm identifies which variables can be hidden while preserving realizability and ensuring correct composition, and these are eliminated from component specifications by quantification and conversion of binary decision diagrams to formulas. The resulting specifications describe component viewpoints with full information with respect to the remaining variables, which is essential for tractable algorithmic synthesis of implementations. The specifications are written in TLA+, with liveness properties restricted to an implication of conjoined recurrence properties, known as GR(1). We define an operator for forming open systems from closed systems, based on a variant of the “while-plus” operator. This operator simplifies the writing of specifications that are realizable without being vacuous. To convert the generated specifications from binary decision diagrams to readable formulas over integer variables, we symbolically solve a minimal covering problem. We show with examples how the method can be applied to obtain contracts that formalize the hierarchical structure of system design.
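
    The core hiding step, eliminating a variable by existential quantification, can be illustrated with a drastically simplified Haskell sketch: plain state predicates over boolean variables instead of BDD-represented GR(1) properties. All names here are assumptions.

        import qualified Data.Map as Map

        type Var        = String
        type Assignment = Map.Map Var Bool
        type Spec       = Assignment -> Bool   -- a state predicate, for brevity

        -- Hide v from a spec by existential quantification: the reduced spec
        -- holds if some boolean value of v satisfies the original spec.
        hide :: Var -> Spec -> Spec
        hide v spec asg =
          spec (Map.insert v True asg) || spec (Map.insert v False asg)

        -- Example: a spec relating a visible variable "req" to an internal
        -- variable "ack"; after hiding "ack", any value of "req" is allowed.
        demo :: Bool
        demo = hide "ack" spec (Map.fromList [("req", True)])   -- True
          where
            spec asg = Map.findWithDefault False "req" asg
                    == Map.findWithDefault False "ack" asg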

    Linearizability with Ownership Transfer

    Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms. Unfortunately, it assumes complete isolation between a library and its client, with interactions limited to passing values of a given data type. This is inappropriate for common programming languages, where libraries and their clients can communicate via the heap, transferring the ownership of data structures, and can even run in a shared address space without any memory protection. In this paper, we present the first definition of linearizability that lifts this limitation and establish an Abstraction Theorem: while proving a property of a client of a concurrent library, we can soundly replace the library by its abstract implementation, related to the original one by our generalisation of linearizability. This allows abstracting from the details of the library implementation while reasoning about the client. We also prove that linearizability with ownership transfer can be derived from the classical one if the library does not access some of the data structures transferred to it by the client.
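
    For reference, the classical definition that this paper generalises can be sketched as a brute-force check: a complete concurrent history is linearizable if some permutation of its operations respects real-time order and the object's sequential specification. The register spec and all names below are illustrative assumptions.

        import Data.List (permutations)

        -- An operation with its invocation and response timestamps.
        data Op = Op { name :: String, arg :: Int, res :: Int
                     , invT :: Int, retT :: Int }

        -- Real-time order: in a candidate linearization, no operation may be
        -- placed before one that had already returned when it was invoked.
        respectsRealTime :: [Op] -> Bool
        respectsRealTime ops =
          and [ not (retT b < invT a)
              | (i, a) <- zip [0 ..] ops
              , b <- drop (i + 1) ops ]

        -- Sequential spec of a register: a write returns 0 and a read returns
        -- the value of the most recent write (initially 0).
        regSpec :: [Op] -> Bool
        regSpec = go 0
          where
            go _ []       = True
            go v (o : os)
              | name o == "write" = res o == 0 && go (arg o) os
              | name o == "read"  = res o == v && go v os
              | otherwise         = False

        -- Brute force over all reorderings; fine only for tiny histories.
        linearizable :: [Op] -> Bool
        linearizable h =
          any (\s -> respectsRealTime s && regSpec s) (permutations h)

        -- Two overlapping operations: write(7) over [0,2] and a read over
        -- [1,3] returning 7; linearizable, with the write ordered first.
        demo :: Bool
        demo = linearizable [Op "write" 7 0 0 2, Op "read" 0 7 1 3]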