
    A Coded Shared Atomic Memory Algorithm for Message Passing Architectures

    This paper considers the communication and storage costs of emulating atomic (linearizable) multi-writer multi-reader shared memory in distributed message-passing systems. The paper contains three main contributions: (1) We present an atomic shared-memory emulation algorithm that we call Coded Atomic Storage (CAS). This algorithm uses erasure coding methods. In a storage system with N servers that is resilient to f server failures, we show that the communication cost of CAS is N/(N-2f). The storage cost of CAS is unbounded. (2) We present a modification of the CAS algorithm known as CAS with Garbage Collection (CASGC). The CASGC algorithm is parametrized by an integer d and has a bounded storage cost. We show that in every execution where the number of write operations concurrent with a read operation is at most d, the CASGC algorithm with parameter d satisfies atomicity and liveness. We explicitly characterize the storage cost of CASGC, and show that it has the same communication cost as CAS. (3) We describe an algorithm known as the Communication Cost Optimal Atomic Storage (CCOAS) algorithm that achieves a smaller communication cost than CAS and CASGC. In particular, CCOAS incurs read and write communication costs of N/(N-f), measured in terms of number of object values. We also discuss drawbacks of CCOAS as compared with CAS and CASGC.
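
    To make the cost expressions concrete, here is a minimal Python sketch (not from the paper; the parameter values are purely illustrative) that evaluates the quoted formulas for a small deployment. The CASGC storage bound (d + 1)N/(N - 2f) is taken from the companion abstract further below.

```python
from fractions import Fraction

def cas_communication_cost(n, f):
    """Read/write communication cost of CAS: N / (N - 2f) object values."""
    assert n > 2 * f, "CAS needs N > 2f servers"
    return Fraction(n, n - 2 * f)

def casgc_storage_cost(n, f, d):
    """Bounded storage cost of CASGC: (d + 1) * N / (N - 2f) object values."""
    assert n > 2 * f
    return (d + 1) * Fraction(n, n - 2 * f)

def ccoas_communication_cost(n, f):
    """Read/write communication cost of CCOAS: N / (N - f) object values."""
    assert n > f
    return Fraction(n, n - f)

n, f, d = 5, 1, 2                       # sample deployment: 5 servers, 1 failure
print(cas_communication_cost(n, f))     # 5/3, below the f + 1 = 2 of replication
print(casgc_storage_cost(n, f, d))      # 5, versus unbounded for plain CAS
print(ccoas_communication_cost(n, f))   # 5/4, smaller than CAS, as claimed
```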

    Fast Lean Erasure-Coded Atomic Memory Object

    In this work, we propose FLECKS, an algorithm that implements atomic memory objects in a multi-writer multi-reader (MWMR) setting over asynchronous networks with server failures. FLECKS substantially reduces storage and communication costs over its replication-based counterparts by employing erasure codes. FLECKS outperforms previously proposed algorithms in the metrics that matter for good performance, such as storage cost per object, communication cost per operation, fault-tolerance of clients and servers, guaranteed liveness of operations, and the number of communication rounds per operation. We provide proofs of the liveness and atomicity properties of FLECKS and derive worst-case latency bounds for its operations. We implemented and deployed FLECKS in cloud-based clusters and demonstrate that it has substantially lower storage and bandwidth costs, and significantly lower operation latency, than replication-based mechanisms.
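
    The storage savings that FLECKS attributes to erasure codes can be illustrated with the simplest possible code. The sketch below is a toy single-parity code (my illustration, not the code family FLECKS actually uses): each of k data servers stores 1/k of the object plus one server stores a parity fragment, yet any single server loss is recoverable, whereas full replication would store the whole object on every server.

```python
def encode(data: bytes, k: int) -> list:
    """Split data into k equal fragments plus one XOR parity fragment."""
    data += bytes((-len(data)) % k)               # pad to a multiple of k
    size = len(data) // k
    frags = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = frags[0]
    for frag in frags[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, frag))
    return frags + [parity]

def recover(frags):
    """Rebuild at most one missing fragment by XOR-ing all the others."""
    missing = [i for i, f in enumerate(frags) if f is None]
    if missing:
        assert len(missing) == 1, "a single-parity code tolerates one erasure"
        present = [f for f in frags if f is not None]
        rebuilt = present[0]
        for f in present[1:]:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, f))
        frags[missing[0]] = rebuilt
    return frags

shards = encode(b"shared object state!", k=4)     # 5 fragments for 5 servers
shards[2] = None                                  # one server crashes
print(b"".join(recover(shards)[:4]))              # b'shared object state!'
```

    A production system would use a code tolerating f erasures (e.g., Reed-Solomon with N = k + f fragments); an algorithm like FLECKS presumably layers its quorum and timestamp logic on top of such a code.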

    Coded Emulation of Shared Atomic Memory for Message Passing Architectures

    This paper considers the communication and storage costs of emulating atomic (linearizable) read/write shared memory in distributed message-passing systems. We analyze the costs of previously-proposed algorithms by Attiya, Bar-Noy, and Dolev (the ABD algorithm) and by Fan and Lynch (the LDR algorithm), and develop new coding-based algorithms that significantly reduce these costs. The paper contains three main contributions: (1) We present a new shared-memory algorithm that we call CAS, for Coded Atomic Storage. This algorithm uses erasure coding methods. (2) In a storage system with N servers that is resilient to f server failures, we show that the communication costs for the ABD and LDR algorithms, measured in terms of number of object values, are both at least f + 1, whereas the communication cost for CAS is N/(N-2f). (3) We also explicitly quantify the storage costs of the ABD, LDR, and CAS algorithms. The storage cost of the ABD algorithm, measured in terms of number of object values, is N, whereas the storage costs of the LDR and CAS algorithms are both unbounded. We present a modification of the CAS algorithm based on the idea of garbage collection. The modified version of CAS has a storage cost of (d + 1) N/(N-2f), where d is an upper bound on the number of operations that are concurrent with a read operation. Thus, if d is sufficiently small, the storage cost of CAS is lower than those of both the ABD and LDR algorithms.
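
    For contrast with the coded approach, here is a compact single-writer sketch of the replication-based quorum pattern that ABD-style algorithms follow (a simplification for illustration only; the actual ABD algorithm uses writer-unique timestamps and explicit message passing rather than shared objects). Every contacted server receives a full copy of the value, which is why the communication cost cannot drop below f + 1 object values.

```python
import random

class Server:
    def __init__(self):
        self.tag, self.value = 0, None

class ABDStore:
    """Toy single-writer version of majority-quorum atomic storage."""
    def __init__(self, n, f):
        assert n > 2 * f, "majority quorums require N > 2f"
        self.servers = [Server() for _ in range(n)]
        self.q = n - f                          # any N - f servers form a quorum

    def _quorum(self):
        return random.sample(self.servers, self.q)

    def write(self, value):
        tag = max(s.tag for s in self._quorum()) + 1    # phase 1: pick a new tag
        for s in self._quorum():                        # phase 2: store full copy
            if tag > s.tag:
                s.tag, s.value = tag, value

    def read(self):
        tag, value = max(((s.tag, s.value) for s in self._quorum()),
                         key=lambda tv: tv[0])          # phase 1: highest copy
        for s in self._quorum():                        # phase 2: write it back
            if tag > s.tag:
                s.tag, s.value = tag, value
        return value

store = ABDStore(n=5, f=2)
store.write("v1")
print(store.read())   # 'v1', since any two quorums of size N - f intersect
```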

    A compiler approach to scalable concurrent program design

    The programmer's most powerful tool for controlling complexity in program design is abstraction. We seek to use abstraction in the design of concurrent programs, so as to separate design decisions concerned with decomposition, communication, synchronization, mapping, granularity, and load balancing. This paper describes programming and compiler techniques intended to facilitate this design strategy. The programming techniques are based on a core programming notation with two important properties: the ability to separate concurrent programming concerns, and extensibility with reusable programmer-defined abstractions. The compiler techniques are based on a simple transformation system together with a set of compilation transformations and portable run-time support. The transformation system allows programmer-defined abstractions to be defined as source-to-source transformations that convert abstractions into the core notation. The same transformation system is used to apply compilation transformations that incrementally transform the core notation toward an abstract concurrent machine. This machine can be implemented on a variety of concurrent architectures using simple run-time support. The transformation, compilation, and run-time system techniques have been implemented and are incorporated in a public-domain program development toolkit. This toolkit operates on a wide variety of networked workstations, multicomputers, and shared-memory multiprocessors. It includes a program transformer, concurrent compiler, syntax checker, debugger, performance analyzer, and execution animator. A variety of substantial applications have been developed using the toolkit, in areas such as climate modeling and fluid dynamics.
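
    As a toy illustration of the source-to-source idea (hypothetical names; this is not the toolkit's actual notation), the snippet below rewrites a programmer-defined abstraction, parmap, into an assumed core-notation primitive, spawn_all, using Python's ast module:

```python
import ast

class ParMapRewriter(ast.NodeTransformer):
    """Rewrite calls to the abstraction parmap(...) into the core
    primitive spawn_all(...), leaving everything else untouched."""
    def visit_Call(self, node):
        self.generic_visit(node)               # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "parmap":
            node.func = ast.Name(id="spawn_all", ctx=ast.Load())
        return node

source = "results = parmap(simulate_cell, climate_grid)"
tree = ParMapRewriter().visit(ast.parse(source))
print(ast.unparse(tree))   # results = spawn_all(simulate_cell, climate_grid)
```

    The paper's transformation system applies this same mechanism twice: once to lower programmer-defined abstractions into the core notation, and again to lower the core notation toward the abstract concurrent machine.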

    ARES: Adaptive, Reconfigurable, Erasure coded, atomic Storage

    Atomicity, or strong consistency, is one of the most fundamental, most intuitive, and hardest-to-provide primitives in distributed shared memory emulations. To ensure survivability, scalability, and availability of a storage service in the presence of failures, traditional approaches for atomic memory emulation in message-passing environments replicate the objects across multiple servers. Compared to replication-based algorithms, erasure-code-based atomic memory algorithms have much lower storage and communication costs, but they are usually harder to design. The difficulty of designing atomic memory algorithms grows further when the set of servers may be changed to ensure survivability of the service across software and hardware upgrades while avoiding service interruptions. Atomic memory algorithms that perform server reconfiguration in replicated systems are few and complex, and remain an active area of research; reconfigurable erasure-code-based algorithms are non-existent. In this work, we present ARES, an algorithmic framework that allows reconfiguration of the underlying servers and is particularly suitable for erasure-code-based algorithms emulating atomic objects. ARES introduces new configurations while keeping the service available. For use with ARES we also propose a new and, to our knowledge, first two-round erasure-code-based algorithm, TREAS, for emulating multi-writer, multi-reader (MWMR) atomic objects in asynchronous message-passing environments with near-optimal communication and storage costs. Our algorithms can tolerate crash failures of any client and of some fraction of the servers, and yet guarantee safety and liveness properties. Moreover, by bringing together the advantages of ARES and TREAS, we propose an optimized algorithm in which new configurations can be installed without the object values passing through the reconfiguration clients.
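
    The reconfiguration idea can be sketched in a few lines (assumed structure only; ARES's actual protocol coordinates concurrent reads, writes, and reconfigurations, and moves erasure-coded fragments rather than full values):

```python
from dataclasses import dataclass, field

@dataclass
class Config:
    n: int = 5                     # number of servers in this configuration
    k: int = 3                     # code dimension: any k fragments rebuild a value
    servers: dict = field(default_factory=dict)   # server id -> (tag, value)

@dataclass
class Service:
    configs: list                  # totally ordered sequence of configurations

    def reconfigure(self, new: Config):
        old = self.configs[-1]
        # Read the highest-tagged state visible in the old configuration.
        state = max(old.servers.values(), key=lambda tv: tv[0], default=(0, None))
        # Transfer it into the new configuration before making it current,
        # so no client can observe an older value after the switch.
        for sid in range(new.n):
            new.servers[sid] = state
        self.configs.append(new)   # the old configuration can now be retired

svc = Service(configs=[Config(servers={i: (3, "v3") for i in range(5)})])
svc.reconfigure(Config(n=7, k=5))
print(svc.configs[-1].servers[0])  # (3, 'v3') survives the reconfiguration
```

    Note that in this sketch the value passes through the reconfiguring client; the optimized ARES variant mentioned in the abstract avoids exactly that.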

    The role of concurrency in an evolutionary view of programming abstractions

    In this paper we examine how concurrency has been embodied in mainstream programming languages. In particular, we borrow the language of evolution from biology to discuss major historical landmarks and crucial concepts that shaped the development of programming languages. We examine the general development process, occasionally delving into a specific language, trying to uncover evolutionary lineages related to specific programming traits. We mainly focus on concurrency, discussing the different abstraction levels involved in present-day concurrent programming and emphasizing the fact that they correspond to different levels of explanation. We then comment on the role of theoretical research in the quest for suitable programming abstractions, recalling the importance of changing the working framework, and the way of looking at problems, every so often. This paper is not meant to be a survey of modern mainstream programming languages: it would be very incomplete in that sense. It aims instead at making a number of observations and connecting them under an evolutionary perspective, in order to arrive at a unifying, but not simplistic, view of the programming language development process.