852 research outputs found

    ARES: Adaptive, Reconfigurable, Erasure coded, atomic Storage

    Atomicity, or strong consistency, is one of the most fundamental, most intuitive, and hardest-to-provide primitives in distributed shared memory emulations. To ensure survivability, scalability, and availability of a storage service in the presence of failures, traditional approaches for atomic memory emulation in message-passing environments replicate the objects across multiple servers. Compared to replication-based algorithms, erasure-code-based atomic memory algorithms have much lower storage and communication costs, but they are usually harder to design. The difficulty grows further when the set of servers may be changed to ensure survivability of the service across software and hardware upgrades while avoiding service interruptions. Atomic memory algorithms that support server reconfiguration in replicated systems are few, complex, and still an active area of research; reconfigurable erasure-code-based algorithms have been non-existent. In this work, we present ARES, an algorithmic framework that allows reconfiguration of the underlying servers and is particularly suitable for erasure-code-based algorithms emulating atomic objects. ARES introduces new configurations while keeping the service available. For use with ARES, we also propose TREAS, a new and, to our knowledge, first two-round erasure-code-based algorithm for emulating multi-writer, multi-reader (MWMR) atomic objects in asynchronous message-passing environments with near-optimal communication and storage costs. Our algorithms tolerate crash failures of any client and a fraction of the servers while still guaranteeing safety and liveness properties. Moreover, by bringing together the advantages of ARES and TREAS, we propose an optimized algorithm in which new configurations can be installed without the objects' values passing through the reconfiguring clients.
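    As a rough illustration of the two-round write pattern described above, the hypothetical Python sketch below queries all simulated servers for the highest tag (round one) and then propagates per-server coded fragments under a fresh tag (round two). The Server class, treas_write function, and the trivial striping "code" are illustrative stand-ins, not the paper's algorithm: a real deployment would use an (n, k) MDS code and majority quorums over an asynchronous network.

```python
# Hypothetical two-round erasure-coded write, in the spirit of TREAS.
# For clarity the "erasure code" here is trivial striping; a real system
# would compute true (n, k) MDS parity fragments (e.g. Reed-Solomon).

class Server:
    def __init__(self):
        self.tag = (0, None)        # (sequence number, writer id)
        self.fragment = b""

    def query_tag(self):
        return self.tag

    def put_fragment(self, tag, fragment):
        if tag > self.tag:          # keep only the newest version
            self.tag, self.fragment = tag, fragment

def encode(value: bytes, n: int, k: int):
    """Stripe value into k data fragments; pad with n-k repeated fragments
    as parity stand-ins (a real MDS code would compute actual parity)."""
    step = -(-len(value) // k)      # ceiling division
    data = [value[i * step:(i + 1) * step] for i in range(k)]
    return data + data[: n - k]

def treas_write(servers, writer_id, value, k):
    n = len(servers)
    # Round 1: discover the highest tag currently held by the servers.
    max_tag = max(s.query_tag() for s in servers)
    new_tag = (max_tag[0] + 1, writer_id)
    # Round 2: propagate one coded fragment per server under the new tag.
    for s, frag in zip(servers, encode(value, n, k)):
        s.put_fragment(new_tag, frag)
    return new_tag

servers = [Server() for _ in range(5)]
tag = treas_write(servers, "w1", b"hello-ares", k=3)
```

    A reader would symmetrically query a quorum for the highest tag and decode the value from any k matching fragments.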

    Erasure Code Based Cloud Storage System

    Cloud computing is the technology that provides on-demand services and resources, such as storage space, networks, and programming-language execution environments, on top of the Internet in a pay-per-use model. Cloud computing is a globalized concept, and there are no borders within the Cloud. Because of its attractive features, many organizations use Cloud storage for their critical information. Users can store data remotely in the Cloud and access it through thin clients as and when required. One of the major issues in the Cloud today is data security. Storing data in the Cloud can be risky because it resides on the Cloud service provider's servers, which means less control over the stored data. A major concern in the Cloud is how to obtain all of its benefits while maintaining security controls over the data. In this paper, a reliable storage system is proposed that is robust against errors or erasures in the stored data. The proposed system provides reliable storage while maintaining the integrity of the data. The files are split into parts to add an extra layer of security.
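    The split-into-parts idea above can be sketched with a single XOR parity chunk, the simplest possible erasure code: any one lost part can be rebuilt from the survivors. The function names are hypothetical, and a production system would use a stronger (n, k) code such as Reed-Solomon rather than single parity.

```python
# Minimal sketch: split a file into k padded chunks plus one XOR parity
# chunk, so any single lost chunk is recoverable. Illustrative only.
from functools import reduce

def split_with_parity(data: bytes, k: int):
    """Split data into k equal-size (zero-padded) chunks plus XOR parity."""
    step = -(-len(data) // k)       # ceiling division
    chunks = [data[i * step:(i + 1) * step].ljust(step, b"\0")
              for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))
    return chunks, parity

def recover(chunks, parity, lost_index):
    """Rebuild the chunk at lost_index by XOR-ing parity with survivors."""
    survivors = [c for i, c in enumerate(chunks) if i != lost_index]
    return bytes(reduce(lambda a, b: a ^ b, col)
                 for col in zip(parity, *survivors))

chunks, parity = split_with_parity(b"secret-report", k=4)
```

    Storing the chunks on different servers also gives the extra security layer mentioned above, since no single server holds the whole file.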

    Evaluation of cross-layer reliability mechanisms for satellite digital multimedia broadcast

    This paper presents a study of reliability mechanisms that may be put to work in the context of Satellite Digital Multimedia Broadcasting (SDMB) to mobile devices such as handheld phones. These mechanisms include error-correcting codes and interleaving at the physical layer, erasure codes at intermediate layers, and error concealment in the video decoder. The evaluation is made on a realistic satellite channel and takes into account practical constraints such as the maximum zapping time and user mobility at several speeds. The evaluation is done by simulating different scenarios with complete protocol stacks. The simulations indicate that, under the assumptions taken here, the scenario using highly compressed video protected by erasure codes at intermediate layers seems to be the best solution for this kind of channel.

    Cross-layer based erasure code to reduce the 802.11 performance anomaly : when FEC meets ARF

    Wireless networks are now widely accepted and deployed. Consumers have become accustomed to wireless connectivity in their daily lives due to the pervasiveness of the 802.11b/g wireless LAN standards. Notably, the emergence of the next evolution of Wi-Fi technology, known as 802.11n, is pushing a new revolution in personal wireless communication. However, in the context of WLANs, although multiple novel wireless access technologies have been proposed and developed to offer high bandwidth and guarantee quality of transmission, some deficiencies remain due to the original design of the WLAN MAC layer. In particular, the performance anomaly of 802.11 is a serious issue that induces a potentially dramatic reduction of the global bandwidth when one or several mobile nodes downgrade their transmission rates following signal degradation. In this paper, we study how the use of an adaptive erasure code as a replacement for the Auto Rate Fallback (ARF) mechanism can help mitigate this performance anomaly. A preliminary study shows a global increase in the goodput delivered to mobile hosts attached to an access point.
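    The replacement strategy described above can be illustrated with a small, hypothetical sizing rule: instead of downgrading the PHY rate (which throttles the whole cell), the sender keeps the high rate and adds enough erasure-coded repair packets to cover the measured frame loss. The formula is an illustrative heuristic, not the mechanism evaluated in the paper.

```python
# Hypothetical repair-packet sizing for an adaptive packet-level erasure
# code: send enough packets that, at the observed loss rate, the expected
# number of survivors still reaches the k data packets, plus a small
# fixed safety margin. Illustrative heuristic only.
import math

def repair_packets(k: int, loss_rate: float, extra: int = 2) -> int:
    """Number of FEC repair packets to add to a block of k data packets."""
    if loss_rate >= 1.0:
        raise ValueError("link unusable")
    n = math.ceil(k / (1.0 - loss_rate)) + extra   # total packets to send
    return n - k
```

    A degraded station thus pays for its own losses in redundancy instead of forcing every station onto the lowest common rate.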

    Decentralized Erasure Codes for Distributed Networked Storage

    We consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k<n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce Decentralized Erasure Codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. We show that decentralized erasure codes are optimally sparse and lead to reduced communication, storage, and computation costs over random linear coding.
    Comment: to appear in IEEE Transactions on Information Theory, Special Issue: Networking and Information Theory
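    The construction can be illustrated with a toy version: each storage node keeps a random linear combination of the k source symbols, and the collector recovers the data by solving a k-by-k linear system from any k queried nodes. For brevity this sketch works over the rationals with dense coefficients; the actual decentralized erasure codes operate over a finite field and are deliberately sparse.

```python
# Toy decentralized storage: n nodes each store a random linear
# combination of k source symbols; a collector queries any k nodes and
# solves the resulting system by Gauss-Jordan elimination. Illustrative
# only: real codes are sparse and defined over a finite field.
import random
from fractions import Fraction

def store(sources, n):
    """Each node draws random coefficients and keeps (coeffs, combination)."""
    k = len(sources)
    nodes = []
    for _ in range(n):
        coeffs = [Fraction(random.randint(1, 255)) for _ in range(k)]
        combo = sum(c * s for c, s in zip(coeffs, sources))
        nodes.append((coeffs, combo))
    return nodes

def collect(nodes, k):
    """Recover the k sources from any k nodes via Gauss-Jordan elimination."""
    rows = [(list(c), v) for c, v in random.sample(nodes, k)]
    for i in range(k):
        pivot = next(r for r in range(i, k) if rows[r][0][i] != 0)
        rows[i], rows[pivot] = rows[pivot], rows[i]
        ci, vi = rows[i]
        for r in range(k):
            if r != i and rows[r][0][i] != 0:
                f = rows[r][0][i] / ci[i]
                rows[r] = ([a - f * b for a, b in zip(rows[r][0], ci)],
                           rows[r][1] - f * vi)
    return [v / c[i] for i, (c, v) in enumerate(rows)]

random.seed(1)
sources = [Fraction(7), Fraction(11), Fraction(13)]
nodes = store(sources, n=6)
recovered = collect(nodes, k=3)
```

    With random coefficients the queried k-by-k matrix is invertible with high probability, which is the property the paper establishes (with sparse coefficients) for its construction.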