15 research outputs found

    Timing properties and correctness for structured parallel programs on x86-64 multicores

    This paper determines correctness and timing properties for structured parallel programs on x86-64 multicores. Multicore architectures are increasingly common, but real architectures have unpredictable timing properties, and even correctness is not obvious above the relaxed-memory concurrency models enforced by commonly used hardware. This paper takes a rigorous approach to correctness and timing properties, examining common locking protocols from first principles and extending the analysis through queues to structured parallel constructs. We prove functional correctness and derive simple timing models, both of which extend, for the first time, from low-level primitives to high-level parallel patterns. Our derived high-level timing models for structured parallel programs allow us to accurately predict upper bounds on program execution times on x86-64 multicores.
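    To give a concrete sense of the low-level primitives such an analysis starts from, the sketch below shows a minimal test-and-set spinlock in C++ with acquire/release orderings. It is an assumed illustration of a "common locking protocol", not code from the paper, and it says nothing about the paper's timing models.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Minimal test-and-set spinlock: an illustrative C++ analogue of the kind of
// low-level locking primitive analysed from first principles on x86-64
// (hypothetical code, not taken from the paper).
class SpinLock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        // Spin until the flag was previously false; the acquire here pairs
        // with the release in unlock(), ordering the critical section.
        while (locked.exchange(true, std::memory_order_acquire)) { /* spin */ }
    }
    void unlock() { locked.store(false, std::memory_order_release); }
};

int main() {
    SpinLock lock;
    long counter = 0;
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&] {
            for (int j = 0; j < 100000; ++j) {
                lock.lock();
                ++counter;          // critical section protected by the lock
                lock.unlock();
            }
        });
    }
    for (auto& t : workers) t.join();
    return counter == 400000 ? 0 : 1;  // simple functional-correctness check
}
```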

    A wide-spectrum language for verification of programs on weak memory models

    Modern processors deploy a variety of weak memory models, which for efficiency reasons may (appear to) execute instructions in an order different to that specified by the program text. The consequences of instruction reordering can be complex and subtle, and can make correctness difficult to ensure. Previous work on the semantics of weak memory models has focussed on the behaviour of assembler-level programs. In this paper we utilise that work to extract some general principles underlying instruction reordering, and apply those principles to a wide-spectrum language encompassing abstract data types as well as low-level assembler code. The goal is to support reasoning about implementations of data structures for modern processors with respect to an abstract specification. Specifically, we define an operational semantics, from which we derive some properties of program refinement, and encode the semantics in the rewriting engine Maude as a model-checking tool. The tool is used to validate the semantics against the behaviour of a set of litmus tests (small assembler programs) run on hardware, and also to model check implementations of data structures from the literature against their abstract specifications.
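    As an illustration of the litmus tests mentioned above, the classic store-buffering (SB) shape can be written in C++ with relaxed atomics; the outcome where both loads return 0 is forbidden under sequential consistency but permitted by weak memory models. This is a generic example, not one of the paper's tests.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

// Store-buffering (SB) litmus test: under sequential consistency at least one
// thread must observe the other's store, so r0 = r1 = 0 is impossible; weak
// memory models (e.g. x86-TSO, via store buffers) allow that outcome.
std::atomic<int> x{0}, y{0};
int r0 = -1, r1 = -1;

int main() {
    std::thread t0([] {
        x.store(1, std::memory_order_relaxed);
        r0 = y.load(std::memory_order_relaxed);
    });
    std::thread t1([] {
        y.store(1, std::memory_order_relaxed);
        r1 = x.load(std::memory_order_relaxed);
    });
    t0.join();
    t1.join();
    std::printf("r0=%d r1=%d\n", r0, r1);  // "r0=0 r1=0" is the weak-memory outcome
    return 0;
}
```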

    Making linearizability compositional for partially ordered executions

    In the interleaving model of concurrency, where events are totally ordered, linearizability is compositional: the composition of two linearizable objects is guaranteed to be linearizable. However, linearizability is not compositional when events are only partially ordered, as in the weak-memory models that describe multicore memory systems. In this paper, we present a generalisation of linearizability for concurrent objects implemented in weak-memory models. We abstract from the details of specific memory models by defining our condition using Lamport’s execution structures. We apply our condition to the C11 memory model, providing a correctness condition for C11 objects. We develop a proof method for verifying objects implemented in C11 and related models. Our method is an adaptation of simulation-based methods but, in contrast to other such methods, it does not require that the implementation totally orders its events. We apply our proof technique and show correctness of the Treiber stack that blocks on empty, annotated with C11 release-acquire synchronisation.
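    For orientation, the sketch below shows a Treiber stack whose pop spins (blocks) on empty, written with C++11 release-acquire atomics in the style of the object verified in the paper. It is an illustrative reconstruction rather than the authors' annotated implementation, and it ignores memory-reclamation (ABA) issues.

```cpp
#include <atomic>

// Treiber stack with release-acquire annotations: push publishes a node with a
// release CAS, and pop's acquire load/CAS synchronises with it, so the popped
// node's contents are visible. Illustrative only; reclamation/ABA is ignored.
template <typename T>
class TreiberStack {
    struct Node { T value; Node* next; };
    std::atomic<Node*> top{nullptr};
public:
    void push(T v) {
        Node* n = new Node{v, top.load(std::memory_order_relaxed)};
        while (!top.compare_exchange_weak(n->next, n,
                                          std::memory_order_release,
                                          std::memory_order_relaxed)) {
            // a failed CAS refreshes n->next with the current top; retry
        }
    }
    T pop() {  // blocks (here: spins) while the stack is empty
        for (;;) {
            Node* n = top.load(std::memory_order_acquire);
            if (n == nullptr) continue;  // empty: keep waiting
            if (top.compare_exchange_weak(n, n->next,
                                          std::memory_order_acquire,
                                          std::memory_order_relaxed)) {
                T v = n->value;  // safe: the acquire synchronised with push's release
                delete n;        // unsafe in general (ABA); fine for illustration
                return v;
            }
        }
    }
};
```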

    Can We Monitor All Multithreaded Programs?

    Runtime Verification (RV) is a lightweight formal method that checks whether an execution of a program is correct with respect to a specification. The specification formalizes the expected correct behavior of the system as a set of properties. Programs are instrumented to extract the necessary information from the execution and feed it to monitors tasked with checking the properties. From the perspective of a monitor, the system is a black box; the trace is the only system information provided. Parallel programs generally add complexity to the program execution due to concurrency, and a concurrent execution of a parallel program is best represented as a partial order. A large number of RV approaches generate monitors using formalisms that rely on a total order, while more recent approaches utilize formalisms that consider multiple traces. In this tutorial, we review some of the main RV approaches and tools that handle multithreaded Java programs. We discuss their assumptions, limitations, expressiveness, and suitability when tackling parallel programs such as producer-consumer and readers-writers. By analyzing the interplay between specification formalisms and concurrent executions of programs, we identify four questions RV practitioners may ask themselves to classify and determine the situations in which it is sound to use the existing tools and approaches.
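    The sketch below illustrates the basic RV setup on a producer-consumer program: instrumentation emits produce/consume events, and a monitor checks the property that no item is consumed before it has been produced. It is a generic illustration in C++ (the tutorial itself targets Java tools), and the Monitor class and event names are hypothetical.

```cpp
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical runtime-verification sketch: the program is instrumented to
// emit produce/consume events; the monitor sees only this trace and checks
// the property "an item is never consumed before it has been produced".
class Monitor {
    std::mutex m;
    long produced = 0, consumed = 0;
    bool violated = false;
public:
    void onProduce() { std::lock_guard<std::mutex> g(m); ++produced; }
    void onConsume() {
        std::lock_guard<std::mutex> g(m);
        if (++consumed > produced) violated = true;  // property violation
    }
    bool ok() { std::lock_guard<std::mutex> g(m); return !violated; }
};

int main() {
    Monitor mon;
    std::mutex qm;
    std::queue<int> q;

    std::thread producer([&] {
        for (int i = 0; i < 1000; ++i) {
            std::lock_guard<std::mutex> g(qm);
            q.push(i);
            mon.onProduce();                 // instrumentation point
        }
    });
    std::thread consumer([&] {
        int taken = 0;
        while (taken < 1000) {
            std::lock_guard<std::mutex> g(qm);
            if (!q.empty()) { q.pop(); ++taken; mon.onConsume(); }  // instrumentation point
        }
    });
    producer.join();
    consumer.join();
    std::printf("property %s\n", mon.ok() ? "holds" : "violated");
    return 0;
}
```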

    Studying Operational Models of Relaxed Concurrency

    Operational Semantics of a Weak Memory Model with Channel Synchronization

    A multitude of weak memory models exists, supporting various types of relaxations and different synchronization primitives. On the one hand, they must be lax enough to allow for hardware and compiler optimizations; on the other, the more lax the model, the harder it is to reason about programs. Though the right balance is up for debate, a memory model should provide what is known as the SC-DRF guarantee, meaning that data-race-free programs behave in a sequentially consistent manner. In this work we present a weak memory model for a calculus inspired by Go. Thus, different from previous approaches, we focus on a memory model with buffered channel communication as the sole synchronization primitive. While memory models are often defined axiomatically, where the notion of a program is abstracted away (often as a graph with memory events as nodes), we formalize our model via an operational semantics. This allows us to prove the SC-DRF guarantee using a standard simulation technique that highlights invariants and gives insight into how the memory model works. Finally, we provide a concrete implementation in K, a rewrite-based executable semantic framework, which allows us to derive an interpreter for the proposed language.
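    To make the channel-as-synchronization idea concrete, the sketch below gives a rough C++ analogue: a bounded buffer standing in for a Go-style buffered channel, where a send happens-before the matching receive, so plain data written before the send can safely be read after the receive. In the paper's calculus the channel is primitive; here it is merely simulated with a mutex and condition variable, so the Channel class is an assumption for illustration only.

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// Illustrative bounded buffer used as the sole synchronisation primitive
// between two threads, mimicking a buffered channel. The send happens-before
// the corresponding receive, so the non-atomic payload is data-race free.
template <typename T>
class Channel {
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> buf;
    std::size_t cap;
public:
    explicit Channel(std::size_t capacity) : cap(capacity) {}
    void send(T v) {
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [&] { return buf.size() < cap; });  // block if buffer is full
        buf.push(std::move(v));
        cv.notify_all();
    }
    T recv() {
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [&] { return !buf.empty(); });      // block if buffer is empty
        T v = std::move(buf.front());
        buf.pop();
        cv.notify_all();
        return v;
    }
};

int main() {
    Channel<int> ch(1);
    int payload = 0;                          // plain, non-atomic data
    std::thread sender([&] {
        payload = 42;                         // write before the send...
        ch.send(1);
    });
    std::thread receiver([&] {
        ch.recv();
        std::printf("payload=%d\n", payload); // ...is visible after the receive
    });
    sender.join();
    receiver.join();
    return 0;
}
```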