
    Computationally Light "Multi-Speed" Atomic Memory

    Memory Access Efficiency in Distributed Atomic Object Implementations

    Distributed data services use redundancy to ensure data availability and survivability. Replication can mask failures, but it introduces the problem of consistency, because operations may access different object replicas, possibly containing obsolete values. Atomicity, introduced by Lamport in 1979, is a venerable notion of consistency and remains the most natural one, because it provides the illusion of equivalence with the serial object type that software designers expect. We deal with the storage of atomic shared readable and writable data in distributed systems that are subject to perturbations in the underlying platforms composed of computers and the networks that interconnect them. The perturbations may include permanent crashes of individual computers, transient failures, and delays in the communication medium. The contents of each object are replicated across several replica servers, and clients invoke read/write operations on the objects. We introduce a new approach that exploits server-to-server communication and devise a solution for the Single-Writer, Multiple-Reader (SWMR) setting in which operations need not involve complete round trips between clients and servers, i.e., operations take “one-and-a-half rounds”. We extend the SWMR solution to yield an algorithm for the Multiple-Writer, Multiple-Reader (MWMR) setting. Next, we investigate implementations that reduce both communication and computation demands, devising two SWMR algorithms. Lastly, we devise algorithms for both the SWMR and MWMR settings in which reads take at most one and a half rounds, in a system with unconstrained quorum construction and reader participation. The algorithms have provable correctness and performance guarantees, and we evaluate them in empirical studies.
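
    The classic quorum-based pattern underlying such algorithms has the writer install a timestamped value at a majority of servers, while a reader first queries a majority for the highest-timestamped value and then propagates that value back before returning; the one-and-a-half-round results above shorten the reader's second phase using server-to-server communication. Below is a minimal, simulated sketch of only the baseline two-phase SWMR scheme (in the style of the Attiya-Bar-Noy-Dolev algorithm); the class and field names are our illustrative choices, and the in-process loops merely stand in for message exchanges with independent, possibly faulty servers.

```python
# Minimal ABD-style SWMR atomic register over simulated replica servers.
# Illustrative sketch only: names and structure are assumptions, not the
# paper's code, and message rounds are modeled as in-process loops.

from dataclasses import dataclass
from typing import Any

@dataclass
class Server:
    ts: int = 0        # highest timestamp this replica has seen
    value: Any = None  # value associated with that timestamp

class SWMRRegister:
    def __init__(self, n: int):
        self.servers = [Server() for _ in range(n)]
        self.majority = n // 2 + 1  # any two majorities intersect
        self.wts = 0                # the single writer's local timestamp

    def write(self, value: Any) -> None:
        # One round: the sole writer stamps the value with a fresh
        # timestamp and installs it at a majority of replicas.
        self.wts += 1
        for s in self.servers[:self.majority]:
            if self.wts > s.ts:
                s.ts, s.value = self.wts, value

    def read(self) -> Any:
        # Round 1 (query): collect timestamped values from a majority
        # and keep the freshest one.
        freshest = max(self.servers[:self.majority], key=lambda s: s.ts)
        ts, value = freshest.ts, freshest.value
        # Round 2 (propagate): write the freshest value back to a majority
        # so that no later read can return an older value -- the phase the
        # paper shortens via server-to-server communication.
        for s in self.servers[:self.majority]:
            if ts > s.ts:
                s.ts, s.value = ts, value
        return value

reg = SWMRRegister(n=5)
reg.write("v1")
assert reg.read() == "v1"
```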

    Consistent Distributed Memory Services: Resilience and Efficiency (Invited Paper)

    Reading, 'Riting, and 'Rithmetic, the three R's underlying much of human intellectual activity, not surprisingly also stand as a venerable foundation of modern computing technology. Indeed, both the Turing machine and von Neumann machine models operate by reading, writing, and computing, and all practical uniprocessor implementations are based on activities structured in terms of the three R's. With the advance of networking technology, communication became an additional major systemic activity. At a high level of abstraction, however, it is apparently still more natural to think in terms of reading, writing, and computing. While it is hard to imagine distributed systems - such as those implementing the World-Wide Web - without communication, we often imagine browser-based applications that operate by retrieving (i.e., reading) data, performing computation, and storing (i.e., writing) the results. In this article, we deal with the storage of shared readable and writable data in distributed systems that are subject to perturbations in the underlying platforms composed of computers and the networks that interconnect them. The perturbations may include permanent failures (or crashes) of individual computers, transient failures, and delays in the communication medium. The focus of this paper is on implementations of distributed atomic memory services. Atomicity is a venerable notion of consistency, introduced by Lamport in 1979 [Lamport, 1979]. To this day, atomicity remains the most natural type of consistency, because it provides the illusion of equivalence with the serial object type that software designers expect. We define the overall setting, the models of computation, atomic consistency, and measures of efficiency. We then present algorithms for single-writer settings in static models, followed by algorithms for multi-writer settings. For both static settings we discuss design issues, correctness, efficiency, and trade-offs. Lastly, we survey implementation issues in dynamic settings, where the universe of participants may change completely over time. Here the expectation is that solutions integrate static algorithms with a reconfiguration framework, so that during periods of relative stability one benefits from the efficiency of the static algorithms, while during more turbulent times performance degrades gracefully as reconfigurations are needed. We describe the most important approaches and provide examples.
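
    One concrete ingredient of the multi-writer algorithms surveyed is the ordering of writes by tags: pairs of a sequence number and a writer identifier, compared lexicographically, so that writes by concurrent writers are always totally ordered. A minimal sketch, assuming illustrative names of our own rather than the paper's notation:

```python
# Lexicographically ordered tags for multi-writer atomic memory.
# Illustrative sketch; the type and field names are assumptions, not the
# paper's notation.

from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Tag:
    seq: int        # logical sequence number, advanced on every write
    writer_id: str  # breaks ties between concurrent writers

def next_tag(max_seen: Tag, my_id: str) -> Tag:
    # A writer first queries a quorum for the highest tag stored anywhere,
    # then stamps its new value with a strictly larger tag.
    return Tag(max_seen.seq + 1, my_id)

# Two writers that both observed tag (3, 'a') produce distinct, totally
# ordered tags, so replicas agree on which write is newer.
assert next_tag(Tag(3, "a"), "b") < next_tag(Tag(3, "a"), "c")
```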

    Coordinated cooperative task computing using crash-prone processors with unreliable multicast

    This paper presents a new message-passing algorithm, called Do-UM, for distributed cooperative task computing in synchronous settings where processors may crash, and where any multicasts (or broadcasts) performed by crashing processors are unreliable. We specify the algorithm, prove its correctness, and analyse its complexity. We show that its worst-case available processor steps complexity is S = Θ(t + n·[Formula presented] + f(n − f)) and that the number of messages sent is less than n²t + [Formula presented], where n is the number of processors, t is the number of tasks to be executed, and f is the number of failures. To assess the performance of the algorithm in practical scenarios, we perform an experimental evaluation on a planetary-scale distributed platform. This also allows us to compare our algorithm with the currently best algorithm, which, however, is explicitly designed to use reliable multicast; the results suggest that our algorithm does not lose much efficiency in order to cope with unreliable multicast.
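
    To make the setting concrete, the following toy simulation captures the round structure of synchronous Do-All with crashes and unreliable multicast: in each round every live processor selects a remaining task under a simple balanced-allocation rule, performs it, and multicasts the task identifier, and a multicast by a processor that crashes in that round is lost entirely. This is a hedged illustration of the problem model only; it is not the Do-UM algorithm, and every name and parameter in it is our assumption.

```python
# Toy synchronous Do-All loop: n processors cooperatively perform t tasks,
# multicasting completed task ids each round. Illustrative sketch only --
# a simple balanced-allocation rule, not the paper's Do-UM algorithm.

import random

def do_all(n: int, t: int, crash_prob: float = 0.05, seed: int = 1):
    """Run the toy protocol; return (rounds used, set of executed task ids)."""
    rng = random.Random(seed)
    live = set(range(n))                   # processors still running
    executed = set()                       # tasks actually performed
    known = {p: set() for p in range(n)}   # each processor's view of done tasks

    rounds = 0
    while live and min(len(known[p]) for p in live) < t:
        rounds += 1
        announced = []                     # multicasts delivered this round
        order = sorted(live)
        for i, p in enumerate(order):
            remaining = sorted(set(range(t)) - known[p])
            if not remaining:
                continue
            # Balanced allocation: the i-th live processor takes a task at
            # a proportional offset into its list of remaining tasks.
            task = remaining[i * len(remaining) // len(order)]
            executed.add(task)
            if rng.random() < crash_prob:
                live.discard(p)        # crash: this round's multicast is lost
            else:
                announced.append(task) # survivors' multicasts are delivered
        for p in live:
            known[p].update(announced)
    # If every processor crashed, some tasks may remain unexecuted; Do-All
    # guarantees assume at least one processor survives (f < n).
    return rounds, executed

rounds, executed = do_all(n=16, t=100)
print(f"{len(executed)} of 100 tasks executed in {rounds} rounds")
```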