    Partial Aborts for Transactions via First Class Continuations

    Software transactional memory (STM) has proven to be a useful abstraction for developing concurrent applications: programmers denote transactions with an atomic construct that delimits a collection of reads and writes to shared mutable references, and the runtime system guarantees that all transactions are observed to execute atomically with respect to each other. Traditionally, when the runtime system detects that one transaction conflicts with another, it aborts one of the transactions and restarts its execution from the beginning. This wastes completed work, increasing execution time and reducing throughput. This thesis presents a novel approach that uses first-class continuations to restart a conflicting transaction at the point of the conflict, avoiding re-execution of any work from the beginning of the transaction that has not been compromised. In practice, this allows transactions to complete more quickly, decreasing execution time and increasing throughput. The ideas presented in this thesis have been implemented in the context of the Manticore project, an ML-family language with support for parallelism and concurrency. Crucially, this work relies on constant-time continuation capture via a continuation-passing-style (CPS) transformation and heap-allocated continuations. The partial-abort scheme has been implemented as part of three modern STM implementations: TL2, TinySTM, and NOrec. Experimental results show that, while no base STM implementation is universally best, each partial-abort implementation compares favorably to its full-abort counterpart. In addition to an implementation, this thesis presents a formal semantics for partial aborts. A proof of correctness is given by relating the partial-abort semantics to an analogous full-abort semantics via a simulation. All proofs have been formally verified using the Coq theorem prover.
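    To make the programming model concrete, the sketch below shows an atomic transaction over shared mutable references. It is written against GHC Haskell's Control.Concurrent.STM rather than Manticore's ML syntax, so the names (atomically, TVar) are not the thesis's API; the comments only indicate where a full-abort runtime restarts the whole block and where the thesis's partial-abort scheme would instead resume from a captured continuation.

        -- A sketch of the atomic programming model described above, using GHC's
        -- Control.Concurrent.STM (not Manticore's syntax). The `atomically` block
        -- delimits the reads and writes that must appear to execute atomically.
        import Control.Concurrent.STM

        -- Move money between two shared mutable references (TVars). On a conflict,
        -- a conventional full-abort STM re-executes this whole block from the top;
        -- a partial-abort scheme would instead resume from a continuation captured
        -- at the first compromised read.
        transfer :: TVar Int -> TVar Int -> Int -> IO ()
        transfer from to amount = atomically $ do
          balance <- readTVar from
          writeTVar from (balance - amount)
          credit <- readTVar to
          writeTVar to (credit + amount)

        main :: IO ()
        main = do
          a <- newTVarIO 100
          b <- newTVarIO 0
          transfer a b 25
          readTVarIO a >>= print   -- 75
          readTVarIO b >>= print   -- 25

    Under contention, the later a conflict is detected inside the atomically block, the more re-executed work a full abort throws away, which is exactly the cost the partial-abort scheme targets.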

    Revealing Behaviours of Concurrent Functional Programs by Systematic Testing

    We aim to make it easier for programmers to write correct concurrent programs, and to demonstrate that concurrency testing techniques, typically described in the context of simple core languages, can be successfully applied to languages with more complex concurrency. In pursuit of these goals, we develop three lines of work.

    Testing concurrent Haskell. We develop a library for testing concurrent Haskell programs using a typeclass abstraction of concurrency (sketched after this list), for which we give a formal semantics. Our tool implements systematic concurrency testing, a family of techniques for deterministically testing concurrent programs. Along the way we also tackle how to soundly handle daemon threads and how to usefully present complex execution traces to a user. We not only obtain a useful tool for Haskell programs, but also show that these techniques work well in languages with rich concurrency abstractions.

    Randomised concurrency testing. We propose a new algorithm for randomly testing concurrent programs. This approach is fundamentally incomplete, but can be suitable in cases where systematic concurrency testing is not. We show that our algorithm performs as well as a pre-existing popular algorithm on a standard set of benchmarks. That algorithm requires program-specific parameters, whereas ours does not; we argue that this makes our algorithm simpler to use and to implement.

    Finding properties of programs. We develop a tool for finding properties of sets of concurrency functions operating on some shared state, such as the API for a concurrent data type. Our tool enumerates Haskell expressions and discovers properties by comparing execution results for a variety of inputs. Unlike other property-discovery tools, we support side effects, which we do by building on our tool for testing concurrent Haskell programs. We argue that this approach can lead to greater understanding of concurrency functions.
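    To illustrate the "typeclass abstraction of concurrency" idea from the first line of work, the sketch below defines a tiny class of its own; the class and names here are invented for illustration and are far smaller than a real library's interface. The point is only the shape: programs are written against the class instead of IO directly, so the ordinary IO instance runs them for real while a testing instance can substitute a deterministic scheduler and systematically explore interleavings.

        {-# LANGUAGE TypeFamilies #-}
        -- A minimal, hypothetical concurrency typeclass. The thesis's library
        -- exposes a much richer interface, but the shape is the same: code is
        -- parametric in the monad, so a tester can replace the scheduler.
        import Control.Monad (void)
        import Control.Concurrent (forkIO)
        import Control.Concurrent.MVar (MVar, newMVar, modifyMVar_, readMVar)
        import Data.Kind (Type)

        class Monad m => MonadConcSketch m where
          type Ref m :: Type -> Type
          forkC     :: m () -> m ()
          newRef    :: a -> m (Ref m a)
          modifyRef :: Ref m a -> (a -> a) -> m ()
          readRef   :: Ref m a -> m a

        -- The "real" instance delegates to IO; a systematic-testing instance would
        -- instead record and control every scheduling decision deterministically.
        instance MonadConcSketch IO where
          type Ref IO   = MVar
          forkC         = void . forkIO
          newRef        = newMVar
          modifyRef r f = modifyMVar_ r (return . f)
          readRef       = readMVar

        -- A small racy program written only against the abstraction: two forked
        -- threads increment a shared counter, so a tester can enumerate the
        -- schedules in which the main thread reads 0, 1, or 2.
        counterDemo :: MonadConcSketch m => m Int
        counterDemo = do
          r <- newRef (0 :: Int)
          forkC (modifyRef r (+ 1))
          forkC (modifyRef r (+ 1))
          readRef r

        main :: IO ()
        main = counterDemo >>= print  -- under plain IO: nondeterministically 0, 1, or 2

    Run under IO, counterDemo is an ordinary racy program; run under a testing instance, a systematic tester can deterministically visit each schedule and report every observable result, which is the behaviour-revealing workflow the abstract describes.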