74,952 research outputs found

    ARES: Adaptive, Reconfigurable, Erasure coded, atomic Storage

    Atomicity, or strong consistency, is one of the most fundamental, most intuitive, and hardest-to-provide primitives in distributed shared memory emulations. To ensure survivability, scalability, and availability of a storage service in the presence of failures, traditional approaches for atomic memory emulation in message-passing environments replicate the objects across multiple servers. Compared to replication-based algorithms, erasure-code-based atomic memory algorithms have much lower storage and communication costs, but they are usually harder to design. The difficulty of designing atomic memory algorithms grows further when the set of servers may be changed to ensure survivability of the service over software and hardware upgrades, while avoiding service interruptions. Atomic memory algorithms that support server reconfiguration in replicated systems are few, complex, and still an active area of research; reconfigurable erasure-code-based algorithms are non-existent. In this work, we present ARES, an algorithmic framework that allows reconfiguration of the underlying servers and is particularly suitable for erasure-code-based algorithms emulating atomic objects. ARES introduces new configurations while keeping the service available. For use with ARES, we also propose TREAS, a new and, to our knowledge, the first two-round erasure-code-based algorithm for emulating multi-writer, multi-reader (MWMR) atomic objects in asynchronous, message-passing environments with near-optimal communication and storage costs. Our algorithms tolerate crash failures of any client and a fraction of the servers, and yet guarantee safety and liveness properties. Moreover, by bringing together the advantages of ARES and TREAS, we propose an optimized algorithm in which new configurations can be installed without the objects' values passing through the reconfiguration clients.
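    To make the storage-cost argument concrete, here is a minimal back-of-the-envelope sketch (not the ARES or TREAS protocol itself) comparing full replication with an (n, k) erasure code in which each server stores one coded fragment of size |v|/k and any k fragments suffice to decode; the function and parameter names are hypothetical.

```python
# Illustrative storage-cost comparison: replication vs. (n, k) erasure coding.
# A cost sketch only, not the ARES/TREAS protocol; names are hypothetical.

def replication_storage(n_servers: int, value_size: float) -> float:
    """Total storage when every server keeps a full copy of the object."""
    return n_servers * value_size

def erasure_coded_storage(n_servers: int, k: int, value_size: float) -> float:
    """Total storage when each server keeps one coded fragment of size value_size / k
    (e.g., an [n, k] MDS code such as Reed-Solomon); any k fragments decode the object."""
    return n_servers * (value_size / k)

if __name__ == "__main__":
    n, k, size = 5, 3, 1.0  # 5 servers, decode from any 3 fragments, object of unit size
    print(f"replication : {replication_storage(n, size):.2f} units")      # 5.00
    print(f"erasure code: {erasure_coded_storage(n, k, size):.2f} units")  # ~1.67
```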

    A Unified Model for Shared-Memory and Message-Passing Systems

    A unified model of distributed systems that accommodates both shared-memory and message-passing communication is proposed. An extension of the I/O automaton model of Lynch and Tuttle, the model provides a full range of types of atomic accesses to shared memory, from basic reads and writes to read-modify-write. In addition to supporting the specification and verification of shared-memory algorithms, the unified model is particularly helpful for proving correspondences between atomic shared objects and invocation-response systems, and for proving the correctness of systems that contain both message passing and shared memory (such as a network of shared-memory multiprocessors or a distributed-memory multiprocessor with multi-threaded nodes). As an illustration of the model, we consider distributed systems in which the shared objects have the linearizability property proposed by Herlihy and Wing. We use the model to construct a careful proof that invocation-response systems constructed from linearizable objects simulate atomic shared memory systems. In addition, we extend the work of Herlihy and Wing by treating not only safety properties of invocation-response systems, but also liveness properties.
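    As a hedged toy illustration of the invocation-response view of an atomic object (not the paper's I/O-automaton construction), the sketch below serializes read and write invocations of a register with a lock, so every completed operation appears to take effect atomically between its invocation and its response.

```python
# Minimal illustrative sketch: a register whose read/write invocations are
# serialized by a lock, so each operation takes effect atomically between its
# invocation and response. A toy example, not the I/O-automaton construction.
import threading

class AtomicRegister:
    def __init__(self, initial=None):
        self._value = initial
        self._lock = threading.Lock()

    def write(self, value) -> None:
        # Invocation: write(value); response: acknowledgement.
        with self._lock:
            self._value = value

    def read(self):
        # Invocation: read(); response: the current value.
        with self._lock:
            return self._value

if __name__ == "__main__":
    reg = AtomicRegister(0)
    writer = threading.Thread(target=reg.write, args=(42,))
    writer.start()
    writer.join()
    print(reg.read())  # 42
```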

    A multiarchitecture parallel-processing development environment

    A description is given of the hardware and software of a multiprocessor test bed: the second-generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allow experiments with various parallel-processing concepts such as message-passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.
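    As a small illustration of the hypercube topology mentioned above (a generic sketch, not the Hypercluster's actual system software), in a d-dimensional hypercube each node is linked to the nodes whose binary IDs differ from its own in exactly one bit.

```python
# Illustrative sketch of hypercube connectivity (not the Hypercluster software):
# in a d-dimensional hypercube, node i is linked to every node whose binary ID
# differs from i in exactly one bit, i.e. i XOR 2**j for j in range(d).

def hypercube_neighbors(node: int, dim: int) -> list[int]:
    """Return the IDs of the nodes adjacent to `node` in a `dim`-dimensional hypercube."""
    return [node ^ (1 << j) for j in range(dim)]

if __name__ == "__main__":
    d = 3  # a 3-cube has 8 nodes, each with 3 neighbours
    for node in range(2 ** d):
        neighbours = [f"{n:0{d}b}" for n in hypercube_neighbors(node, d)]
        print(f"node {node:0{d}b} -> {neighbours}")
```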

    A Bridge Between the Asynchronous Message Passing Model and Local Computations in Graphs

    A distributed system is a collection of processes that can interact. Three major process-interaction models have principally been considered in distributed systems: the message passing model, the shared memory model, and the local computation model. In each model the processes are represented by vertices of a graph and the interactions are represented by edges. In the message passing model and the shared memory model, processes interact through communication primitives: messages can be sent along edges, or atomic read/write operations can be performed on registers associated with edges. In the local computation model, interactions are defined by labelled graph rewriting rules whose supports are edges or stars. These models (and their sub-models) reflect different system architectures, different levels of synchronization, and different levels of abstraction. Understanding the power of the various models, the role of structural network properties, and the role of the initial knowledge enhances our understanding of basic distributed algorithms. This is done with some typical problems in distributed computing: election, naming, spanning tree construction, termination detection, network topology recognition, consensus, and mutual exclusion. Furthermore, solutions to these problems constitute primitive building blocks for many other distributed algorithms. A survey may be found in [FR03]; it presents some links with several parameters of the models, including synchrony, communication media, and randomization. An important goal in the study of these models is to understand the relationships between them. This paper is a contribution to this goal; more precisely, we establish a bridge between the tools and results presented in [YK96] for the message passing model and those presented in [Ang80, BCG+96, Maz97, CM04, CMZ04, Cha05] for the local computation model.
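    As a hedged, generic illustration of the local computation model (a classical pruning rule for election on trees, not an algorithm taken from the cited references), the sketch below repeatedly applies a star-supported relabelling rule: an "active" vertex with exactly one active neighbour relabels itself "defeated"; on a tree, exactly one active vertex remains and is elected.

```python
# Illustrative sketch of election by local relabellings on a tree: an "active"
# vertex with exactly one active neighbour (a star-supported rule) relabels
# itself "defeated"; when no rule applies, one active vertex remains and is the
# leader. A generic toy example, not a specific algorithm from the cited papers.
import random

def elect_on_tree(vertices, edges):
    labels = {v: "active" for v in vertices}
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    while sum(l == "active" for l in labels.values()) > 1:
        # Apply the rule to any vertex where it is enabled: an active "leaf"
        # of the subgraph induced by the active vertices.
        applicable = [v for v in vertices
                      if labels[v] == "active"
                      and sum(labels[w] == "active" for w in adj[v]) == 1]
        labels[random.choice(applicable)] = "defeated"
    return next(v for v in vertices if labels[v] == "active")

if __name__ == "__main__":
    # A small tree (hypothetical example input).
    vertices = [0, 1, 2, 3, 4]
    edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
    print("elected leader:", elect_on_tree(vertices, edges))
```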