77 research outputs found

    Using simulation techniques to prove timing properties

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 151-160). By Victor Luchangco. M.S.

    Extending Transactional Memory with Atomic Deferral

    This paper introduces atomic deferral, an extension to TM that allows programmers to move long-running or irrevocable operations out of a transaction while maintaining serializability: the transaction and its deferred operation appear to execute atomically from the perspective of other transactions. Thus, programmers can adapt lock-based programs to exploit TM with relatively little effort and without sacrificing scalability by atomically deferring the problematic operations. We demonstrate this with several use cases for atomic deferral, as well as an in-depth analysis of its use on the PARSEC dedup benchmark, where we show that atomic deferral enables TM to be competitive with well-designed lock-based code.
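
    The core idea — run the speculative part of a transaction, then execute the deferred operation after commit while the pair still appears atomic to other transactions — can be sketched as follows. This is a toy illustration only: the class and method names are invented for this sketch, and a single lock stands in for the paper's actual TM machinery and conflict detection.

```python
import threading

class DeferredTxn:
    """Toy sketch of atomic deferral: the deferred operation runs after
    the speculative body 'commits', but a lock (a stand-in for real TM
    conflict detection) makes body + deferred op appear atomic to
    other transactions. Names here are illustrative, not the paper's API."""
    _serial = threading.Lock()

    def __init__(self):
        self.deferred = []

    def defer(self, op):
        # Move a long-running or irrevocable operation out of the
        # speculative part; it will run post-commit, still atomically.
        self.deferred.append(op)

    def run(self, body):
        with DeferredTxn._serial:
            result = body(self)       # speculative part (would be a TM txn)
            for op in self.deferred:  # deferred ops execute after commit,
                op()                  # inside the same atomic region
            return result

log = []
txn = DeferredTxn()
# The body defers an irrevocable side effect (here, appending to a log)
# and returns a value; the side effect lands after the body completes.
out = txn.run(lambda t: (t.defer(lambda: log.append("flush")), 42)[1])
```

    In the paper's setting the speculative body would execute as a hardware or software transaction, and the deferred operation (e.g., I/O in the dedup benchmark) would run without being re-executed on abort.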

    Accelerating Native Calls using Transactional Memory

    Transitioning between executing managed code and executing native code requires synchronization to ensure that invariants used by the managed runtime are maintained or restored after the execution of the native code. We describe how transactional memory can be used to accelerate the execution of native methods by reducing the need for such synchronization. We also present the results of a simple experiment that, although preliminary, suggests that this approach may have a significant effect on performance. Unlike most work on exploiting transactional memory, this approach does not depend on concurrency or on improving scalability; indeed, the experiment presented uses a single-threaded benchmark.

    Nonblocking k-compare-single-swap

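
    The primitive named in the title, k-compare-single-swap (KCSS), atomically checks that k locations hold expected values and, if all match, writes a new value to a single one of them. The sketch below models only these semantics: the paper's contribution is a nonblocking implementation, whereas this sketch uses a lock purely to model the atomicity, and the function shape is illustrative rather than the paper's interface.

```python
import threading

_lock = threading.Lock()  # stand-in for the paper's nonblocking machinery

def kcss(cells, expected, new):
    """Reference semantics of k-compare-single-swap: atomically verify
    cells[i][0] == expected[i] for all i, and if so write `new` into
    cells[0]. Cells are one-element lists used as mutable words."""
    with _lock:
        if all(c[0] == e for c, e in zip(cells, expected)):
            cells[0][0] = new
            return True
        return False

a, b = [1], [2]
ok = kcss([a, b], [1, 2], 9)    # all k locations match: writes 9 into a
fail = kcss([a, b], [1, 2], 7)  # a now holds 9, not 1: no write occurs
```

    Checking several locations while swapping only one is what lets KCSS-based data structures detect interference across nodes without the cost of a full k-word compare-and-swap.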

    Theory Meets Practice in the Algorand Blockchain (Invited Talk)

    Robust and effective distributed systems require good theory and good engineering, not separately but in concert: user requirements and system constraints are not merely implementation details but often must inform the design of algorithms for such systems. Blockchains are an excellent example. The heart of a blockchain is its (Byzantine) consensus protocol, and consensus protocols have been extensively studied in the theory community for decades. But traditional consensus protocols are not directly applicable to blockchains, which have, or hope to have, millions of participants. Furthermore, public blockchains, which allow anyone to participate, must have some mechanism to guarantee the security of the protocol, and traditional fault models do not adequately capture the assumptions of such mechanisms. In this talk, I will discuss these and other ways in which theory and practice meet in the context of the Algorand blockchain, and how Algorand is able to achieve high transaction throughput with low latency.

    Memory consistency models for high performance distributed computing

    Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2001. Includes bibliographical references (p. 195-205) and index.
    This thesis develops a mathematical framework for specifying the consistency guarantees of high performance distributed shared memory multiprocessors. This framework is based on computations, which specify the operations requested and constraints on how these operations may be applied; we call the framework computation-centric. This framework is expressive enough to specify high level synchronization mechanisms such as locks. We use the computation-centric framework to specify and compare several memory models, to characterize programming disciplines, and to prove that weakly consistent systems provide strong consistency guarantees when certain programming disciplines are obeyed. Specifically, we define computation-centric versions of several memory models from the literature, including sequential consistency, weak ordering and release consistency, and we give a computation-centric characterization of data-race-free programs. We prove that when running data-race-free programs, weakly ordered systems appear sequentially consistent. We also define memory models that have higher level guarantees such as locks and transactions. The strongly consistent versions of these models make guarantees that are stronger than sequential consistency, and thus are easier for programmers to use. We introduce a new model called weak sequential locking, which has very weak guarantees, and prove that it guarantees sequential consistency and mutually exclusive locking for programs that protect memory accesses using locks. We also show that by using two-phase locking, programmers can implement serializable transactions on any memory system with weak sequential locking.
    The framework is intended primarily to help programmers of such systems reason about their programs. It supports a high level of abstraction, insulating programmers from system details and enhancing the portability of their programs. The framework is also useful for implementors of such systems, in determining what guarantees their implementations provide and in assessing the advantages of providing one memory model rather than another. By Victor Luchangco. Sc.D.
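
    The two-phase locking discipline the abstract invokes — acquire every lock before the transaction body (growing phase), then release only afterward (shrinking phase) — can be sketched as below. This is a minimal single-process illustration of the discipline, not the thesis's formal model; the helper name and the fixed-order acquisition (a common deadlock-avoidance choice) are this sketch's assumptions.

```python
import threading

def two_phase(locks, body):
    """Two-phase locking sketch: acquire all locks up front in a fixed
    global order (growing phase), run the transaction body, then
    release everything (shrinking phase). Under a weak model such as
    the thesis's weak sequential locking, this discipline is what
    yields serializable transactions."""
    ordered = sorted(locks, key=id)  # fixed order avoids deadlock
    for lk in ordered:
        lk.acquire()
    try:
        return body()
    finally:
        for lk in ordered:
            lk.release()

x_lock, y_lock = threading.Lock(), threading.Lock()
shared = {"x": 0, "y": 0}

def transfer():
    # Both accesses happen while both locks are held, so no other
    # two-phase transaction can observe the intermediate state.
    shared["x"] -= 1
    shared["y"] += 1
    return shared["x"] + shared["y"]

total = two_phase([x_lock, y_lock], transfer)
```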

    Computation-Centric Memory Models

    We present a computation-centric theory of memory models. Unlike traditional processor-centric models, computation-centric models focus on the logical dependencies among instructions rather than the processor that happens to execute them. This theory allows us to define what a memory model is, and to investigate abstract properties of memory models. In particular, we focus on constructibility, which is a necessary property of those models that can be implemented exactly by an online algorithm. For a nonconstructible model, we show that there is a natural way to define the constructible version of that model. We explore the implications of constructibility in the context of dag-consistent memory models, which do not require that memory locations be serialized. The strongest dag-consistent model, called NN-dag consistency, is not constructible. However, its constructible version is equivalent to a model that we call location consistency, in which each location is serialized independently.

    ABSTRACT

    Many lock-free data structures in the literature exploit techniques that are possible only because state-of-the-art 64-bit processors are still running 32-bit operating systems and applications. As software catches up to hardware, “64-bit-clean” lock-free data structures, which cannot use such techniques, are needed. We present several 64-bit-clean lock-free implementations: load-linked/store-conditional variables of arbitrary size, a FIFO queue, and a freelist. In addition to being portable to 64-bit software, our implementations also improve on previous ones in that they are space-adaptive and do not require knowledge of the number of threads that will access them.
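
    The load-linked/store-conditional (LL/SC) interface the abstract mentions behaves as sketched below: a store-conditional succeeds only if no successful store has intervened since the matching load-linked. The sketch models just this interface, and does so with a lock plus a version counter; note that a packed version tag is exactly the kind of non-64-bit-clean trick the paper's nonblocking implementations avoid, so this class is a semantic reference only, with invented names.

```python
import threading

class LLSCVar:
    """Sketch of the LL/SC interface: ll() returns the current value
    together with a linking token; sc(token, new) installs `new` only
    if no successful sc has occurred since that ll(). The lock and
    version counter here merely model the semantics; the paper's
    implementations are nonblocking and 64-bit-clean."""
    def __init__(self, value):
        self._value = value
        self._version = 0
        self._lock = threading.Lock()

    def ll(self):
        with self._lock:
            return self._value, self._version

    def sc(self, linked_version, new):
        with self._lock:
            if self._version != linked_version:
                return False   # an intervening store occurred
            self._value = new
            self._version += 1
            return True

v = LLSCVar(10)
val, tag = v.ll()
ok = v.sc(tag, val + 1)   # succeeds: no store intervened since ll()
stale = v.sc(tag, 99)     # fails: the previous sc advanced the state
```

    Unlike compare-and-swap, LL/SC does not suffer the ABA problem — a store of the old value back into the variable still causes a later store-conditional to fail — which is why it is a convenient base for the queue and freelist the paper builds.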