
    A concurrency semantics for relaxed atomics that permits optimisation and avoids thin-air executions

    Despite much research on concurrent programming languages, especially for Java and C/C++, we still do not have a satisfactory definition of their semantics, one that admits all common optimisations without also admitting undesired behaviour. Especially problematic are the "thin-air" examples involving high-performance concurrent accesses, such as C/C++11 relaxed atomics. The C/C++11 model is in a per-candidate-execution style, and previous work has identified a tension between that and the fact that compiler optimisations do not operate over single candidate executions in isolation; rather, they operate over syntactic representations that represent all executions. In this paper we propose a novel approach that circumvents this difficulty. We define a concurrency semantics for a core calculus, including relaxed-atomic and non-atomic accesses, and locks, that admits a wide range of optimisation while still forbidding the classic thin-air examples. It also addresses other problems relating to undefined behaviour. The basic idea is to use an event-structure representation of the current state of each thread, capturing all of its potential executions, and to permit interleaving of execution and transformation steps over that to reflect optimisation (possibly dynamic) of the code. These are combined with a non-multi-copy-atomic storage subsystem, to reflect common hardware behaviour. The semantics is defined in a mechanised and executable form, and designed to be implementable above current relaxed hardware and strong enough to support the programming idioms that C/C++11 does for this fragment. It offers a potential way forward for concurrent programming language semantics, beyond the current C/C++11 and Java models. This work was partly funded by the EPSRC Programme Grant REMS: Rigorous Engineering for Mainstream Systems, EP/K008528/
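    The classic thin-air shape this abstract refers to is load buffering with data dependencies on both threads. The sketch below (a minimal illustration with made-up variable names, not code from the paper) renders it with C++11 relaxed atomics; the per-candidate-execution C/C++11 model fails to forbid the out-of-thin-air outcome, even though no compiler or hardware produces it.

```cpp
#include <atomic>
#include <thread>

// Load buffering with data dependencies (LB+datas): each thread stores
// the value it just read. The C/C++11 axiomatic model admits the outcome
// r1 == r2 == 42, in which 42 appears "out of thin air": neither thread
// ever computes 42 except by copying it from the other.
std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;  // observed only after both joins, so no data race

void thread0() {
    r1 = x.load(std::memory_order_relaxed);
    y.store(r1, std::memory_order_relaxed);  // data-dependent on the load
}

void thread1() {
    r2 = y.load(std::memory_order_relaxed);
    x.store(r2, std::memory_order_relaxed);  // data-dependent on the load
}

int main() {
    std::thread t0(thread0), t1(thread1);
    t0.join(); t1.join();
    // In practice r1 == r2 == 0 is the only outcome ever observed, yet a
    // satisfactory model must also forbid the thin-air one outright.
}
```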

    On Thin Air Reads: Towards an Event Structures Model of Relaxed Memory

    To model relaxed memory, we propose confusion-free event structures over an alphabet with a justification relation. Executions are modeled by justified configurations, where every read event has a justifying write event. Justification alone is too weak a criterion, since it allows cycles of the kind that result in so-called thin-air reads. Acyclic justification forbids such cycles, but also invalidates event reorderings that result from compiler optimizations and dynamic instruction scheduling. We propose the notion of well-justification, based on a game-like model, which strikes a middle ground. We show that well-justified configurations satisfy the DRF theorem: in any data-race-free program, all well-justified configurations are sequentially consistent. We also show that rely-guarantee reasoning is sound for well-justified configurations, but not for justified configurations. For example, well-justified configurations are type-safe. Well-justification allows many, but not all, reorderings performed by relaxed memory. In particular, it fails to validate the commutation of independent reads. We discuss variations that may address these shortcomings.
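    The reorderings that a purely acyclic justification criterion would wrongly forbid can be illustrated with the dependency-free load-buffering test, sketched below in C++11-style relaxed atomics (an illustrative example, not taken from the paper). Each write is independent of the read before it, so compilers and weak hardware may hoist the write, and the outcome r1 == r2 == 1 is genuinely observable; yet each read's justifying write sits after the other thread's read, so a stepwise acyclic construction cannot build the execution.

```cpp
#include <atomic>
#include <thread>

// Load buffering without dependencies (LB): the store on each thread does
// not depend on the value just read, so it may legally be reordered above
// the load. The outcome r1 == r2 == 1 is allowed and is observable on
// weakly ordered hardware such as Arm and POWER.
std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;  // observed only after both joins

void thread0() {
    r1 = x.load(std::memory_order_relaxed);
    y.store(1, std::memory_order_relaxed);  // independent of r1
}

void thread1() {
    r2 = y.load(std::memory_order_relaxed);
    x.store(1, std::memory_order_relaxed);  // independent of r2
}

int main() {
    std::thread t0(thread0), t1(thread1);
    t0.join(); t1.join();
    // A useful criterion must admit r1 == r2 == 1 here while still
    // rejecting the dependent, thin-air variant of the same shape.
}
```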

    Automating C++ Execution Exploration to Solve the Out-of-thin-air Problem

    Modern computers are marvels of engineering: customisable reasoning engines which can be programmed to complete complex mathematical tasks at incredible speed. Decades of engineering have taken computers from room-sized machines to near-invisible devices in all aspects of life. With this engineering has come more complex and ornate design, a substantial leap forward being multiprocessing. Modern processors can execute threads of program logic in parallel, coordinating shared resources like memory and device access. Parallel computation leads to significant scaling of compute power, but yields a substantial complexity cost for both processor designers and programmers. Parallel access to shared memory requires coordination on which thread can use a particular fragment of memory at a given time. Simple mechanisms like locks and mutexes, which ensure only one process at a time can access memory, give an easy-to-use programming model, but they eschew the benefits of parallel computation. Instead, processors today have complex mechanisms to permit concurrent shared memory access. These mechanisms prevent simple programmer reasoning and require complex formal descriptions to define: memory models.

    Early memory model research focused on weak memory behaviours which are observable because of hardware design; over time it has become obvious that not only hardware but compilers are capable of making new weak behaviours observable. Substantial and rapid success has been achieved formalising the behaviour of these machines: researchers refined new specifications for shared-memory concurrency and used mechanisation to automate validation of their models. As the models were refined and new behaviours of the hardware were discovered, researchers also began working with processor vendors – helping to inform design choices in new processor designs to keep the weak behaviours within some sensible bounds. Unfortunately, when reasoning about shared memory accesses of highly optimised programming languages like C and C++, deep questions are still left open about how best to describe the behaviour of shared memory accesses in the presence of dependency-removing compiler optimisations. Until very recently it has not been possible to properly specify the behaviours of these programs without forbidding optimisations which are used and observable, or allowing program behaviours which are nonsense and never observable.

    In this thesis I explore the development of memory models through the lens of tooling: taking at first an industrial approach, and then exploring memory models for highly optimised programming languages. I show that taming the complexity of these models with automated tools aids bug finding even where formal evaluation has not. Further, building tools creates a focus on the computational complexity of the memory model, which in turn can steer development of the model towards simpler designs. We will look at three case studies. The first is an industrial hardware model of NVIDIA GPUs which we extend to encompass more hardware features than before; this extension was validated using an automated testing process generating tests of finite size, and then verified against the original memory model in Coq. The second case study is an exploration of the first memory model for an optimised programming language which takes proper account of dependencies. We build a tool to automate execution of this model over a series of tests, and in the process discovered subtleties in the definitions which were unexpected – leading to refinement of the model. In the final case study, we develop a memory model that gives a direct definition for compiler-preserved dependencies. This model is the first that can be integrated with relative ease into the C/C++ programming language standard. We built this model alongside its own tooling, yielding a fast tool for giving determinations on a large number of litmus tests – a novelty for this sort of memory model. This model fits well with the existing C/C++ specifications, and we are working with the International Standards Organisation to understand how best to fit this model into the standard.
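    The "dependency removing compiler optimisations" at the heart of this problem can be illustrated with a standard example, sketched in C++ below (the function names are illustrative, not from the thesis): a control dependency that is present syntactically but carries no semantic information, and which a compiler is therefore entitled to erase.

```cpp
#include <atomic>

std::atomic<int> x{0}, y{0};

// As written, the store to y is control-dependent on the value read from
// x, so a model keyed on syntactic dependencies would order the two.
int as_written() {
    int r = x.load(std::memory_order_relaxed);
    if (r == 1)
        y.store(1, std::memory_order_relaxed);
    else
        y.store(1, std::memory_order_relaxed);
    return r;
}

// What a compiler may legally produce: both branches are identical, so the
// store becomes unconditional and can even be hoisted above the load. The
// syntactic dependency has been removed, which is why defining compiler
// preserved dependencies directly is delicate.
int after_optimisation() {
    y.store(1, std::memory_order_relaxed);
    return x.load(std::memory_order_relaxed);
}
```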

    Bounding data races in space and time

    We propose a new semantics for shared-memory parallel programs that gives strong guarantees even in the presence of data races. Our local data race freedom property guarantees that all data-race-free portions of programs exhibit sequential semantics. We provide a straightforward operational semantics and an equivalent axiomatic model, and evaluate an implementation for the OCaml programming language. Our evaluation demonstrates that it is possible to balance a comprehensible memory model with a reasonable sequential performance trade-off (no overhead on x86, ∼0.6% on ARM) in a mainstream programming language.

    Overhauling SC atomics in C11 and OpenCL

    Despite the conceptual simplicity of sequential consistency (SC), the semantics of SC atomic operations and fences in the C11 and OpenCL memory models is subtle, leading to convoluted prose descriptions that translate to complex axiomatic formalisations. We conduct an overhaul of SC atomics in C11, reducing the associated axioms in both number and complexity. A consequence of our simplification is that the SC operations in an execution no longer need to be totally ordered. This relaxation enables, for the first time, efficient and exhaustive simulation of litmus tests that use SC atomics. We extend our improved C11 model to obtain the first rigorous memory model formalisation for OpenCL (which extends C11 with support for heterogeneous many-core programming). In the OpenCL setting, we refine the SC axioms still further to give a sensible semantics to SC operations that employ a ‘memory scope’ to restrict their visibility to specific threads. Our overhaul requires slight strengthenings of both the C11 and the OpenCL memory models, causing some behaviours to become disallowed. We argue that these strengthenings are natural, and that all of the formalised C11 and OpenCL compilation schemes of which we are aware (Power and x86 CPUs for C11, AMD GPUs for OpenCL) remain valid in our revised models. Using the HERD memory model simulator, we show that our overhaul leads to an exponential improvement in simulation time for C11 litmus tests compared with the original model, making exhaustive simulation competitive, time-wise, with the non-exhaustive CDSChecker tool.
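    The litmus tests referred to here are small multi-threaded programs paired with a postcondition over the final values observed. The sketch below gives the classic store-buffering (SB) shape using C11/C++11 SC atomics (an illustrative rendering, not a test from the paper); exhaustive simulation of such a test enumerates its candidate executions and reports whether the model admits the outcome named in the postcondition.

```cpp
#include <atomic>
#include <thread>

// Store buffering (SB) with seq_cst accesses: any semantics that gives
// these operations sequentially consistent behaviour must ensure at least
// one thread sees the other's store, so r1 == 0 && r2 == 0 is forbidden.
std::atomic<int> x{0}, y{0};
int r1 = -1, r2 = -1;  // observed only after both joins

void thread0() {
    x.store(1, std::memory_order_seq_cst);
    r1 = y.load(std::memory_order_seq_cst);
}

void thread1() {
    y.store(1, std::memory_order_seq_cst);
    r2 = x.load(std::memory_order_seq_cst);
}

int main() {
    std::thread t0(thread0), t1(thread1);
    t0.join(); t1.join();
    // With memory_order_relaxed instead, the outcome above becomes allowed;
    // distinguishing the two cases is exactly the kind of question a memory
    // model simulator answers.
}
```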