64 research outputs found

    CORE: Augmenting Regenerating-Coding-Based Recovery for Single and Concurrent Failures in Distributed Storage Systems

    Data availability is critical in distributed storage systems, especially since node failures are prevalent in practice. A key requirement is to minimize the amount of data transferred among nodes when recovering the lost or unavailable data of failed nodes. This paper explores recovery solutions based on regenerating codes, which are shown to provide fault-tolerant storage with minimum recovery bandwidth. Existing optimal regenerating codes are designed for single node failures. We build a system called CORE, which augments existing optimal regenerating codes to support a general number of failures, both single and concurrent. We theoretically show that CORE achieves the minimum possible recovery bandwidth in most cases. We implement CORE and evaluate our prototype atop a Hadoop HDFS cluster testbed with up to 20 storage nodes. We demonstrate that our CORE prototype conforms to our theoretical findings and achieves recovery bandwidth savings compared to the conventional recovery approach based on erasure codes.
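
    As an illustration of the bandwidth gap this line of work targets, the sketch below compares the repair traffic of a conventional (n, k) MDS erasure code with a minimum-storage regenerating (MSR) code repairing from d helpers. This is a minimal back-of-the-envelope sketch using the standard MSR repair formula, not CORE's actual recovery algorithm; the parameters M, k, and d are illustrative.

        # Repair-bandwidth sketch: conventional (n, k) MDS repair vs. an MSR
        # regenerating code; illustrative only, not the CORE scheme itself.
        # Assumptions: a file of size M is split into k blocks; a single failed
        # node is repaired from d surviving helpers (k <= d <= n - 1).

        def mds_repair_bandwidth(M: float, k: int) -> float:
            """Conventional erasure-code repair: download k full blocks."""
            return k * (M / k)  # equals M, the whole file

        def msr_repair_bandwidth(M: float, k: int, d: int) -> float:
            """Minimum-storage regenerating point: each of d helpers sends
            beta = M / (k * (d - k + 1)); the total is d * beta."""
            return d * M / (k * (d - k + 1))

        if __name__ == "__main__":
            M, k, d = 1.0, 10, 19  # hypothetical parameters; n = 20 as in the testbed
            print(f"MDS repair: {mds_repair_bandwidth(M, k):.2f} x file size")  # 1.00
            print(f"MSR repair: {msr_repair_bandwidth(M, k, d):.2f} x file size")  # 0.19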

    Evaluating Erasure Codes in Dicoogle PACS

    DICOM (Digital Imaging and Communications in Medicine) is a standard for storing and transmitting medical images and related data, and is commonly used for viewing, storing, printing and transmitting images. Within the DICOM ecosystem, the PACS (Picture Archiving and Communication System) platform Dicoogle has become one of the most in-demand image processing and viewing platforms. However, the Dicoogle PACS architecture does not guarantee the recovery of image information in the case of data loss. This paper therefore proposes a file recovery solution for the Dicoogle architecture. The proposal maximizes the encoding and decoding performance of medical images through computational parallelism. To validate the proposal, a Java implementation based on the Reed-Solomon algorithm is evaluated in a series of performance tests. The experimental results show that the proposal is optimal in terms of image processing time for the Dicoogle PACS storage system.
    Funding: Ministry of Science, Innovation and Universities (MICINN) of Spain, PGC2018 098883-B-C44; European Commission; Programa para el Desarrollo Profesional Docente para el Tipo Superior (PRODEP) of Mexico; Corporación Ecuatoriana para el Desarrollo de la Investigación y la Academia (CEDIA) of Ecuador, CEPRA XII-2018-13; Universidad de Las Américas (UDLA), Quito, Ecuador, IEA.WHP.21.0
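
    The parallelism the proposal relies on can be sketched as follows: split an image into fixed-size chunks and encode the chunks concurrently. Below is a minimal Python sketch of the chunk-and-parallelize pattern (the paper's implementation is in Java); rs_encode is a hypothetical stand-in for a real systematic Reed-Solomon encoder.

        # Parallel chunk-encoding sketch; rs_encode is a placeholder, not a real codec.
        from concurrent.futures import ProcessPoolExecutor

        def rs_encode(chunk: bytes) -> bytes:
            """Stand-in: a real RS encoder would append computed parity symbols."""
            parity = bytes(len(chunk) // 4)  # dummy 25% redundancy
            return chunk + parity

        def encode_image(data: bytes, chunk_size: int = 1 << 20, workers: int = 4) -> list[bytes]:
            """Split an image into chunks and encode them in parallel processes."""
            chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(rs_encode, chunks))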

    Exploration of Erasure-Coded Storage Systems for High Performance, Reliability, and Inter-operability

    The unprecedented growth of data, together with the use of low-cost commodity drives in local disk-based storage systems and remote cloud-based servers, has increased the risk of data loss and the overall user-perceived system latency. To guarantee high reliability, replication has been the most popular choice for decades because of its simplicity in data management. With the high volume of data being generated every day, however, the storage cost of replication is very high and no longer viable. Erasure coding is another approach to adding redundancy in storage systems, providing high reliability at a fraction of the cost of replication. However, the choice of erasure code affects the storage efficiency, reliability, and overall system performance. At the same time, performance and interoperability are adversely affected by slower device components and complex central management systems and operations. To address the problems encountered in the various layers of an erasure-coded storage system, this dissertation explores the different aspects of storage and designs several techniques to improve reliability, performance, and interoperability. These techniques range from a comprehensive evaluation of erasure codes and the application of erasure codes to highly reliable, high-performance SSD systems, to the design of new erasure coding and caching schemes for the Hadoop Distributed File System, one of the central management systems for distributed storage. Detailed evaluations and results are also provided in this dissertation.
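
    The storage-cost argument for erasure coding over replication can be made concrete with a one-line comparison; the sketch below is a simple back-of-the-envelope calculation, with RS(14, 10) chosen purely as an example.

        # Storage overhead: 3-way replication vs. an (n, k) erasure code.
        # RS(14, 10) tolerates any 4 lost blocks; 3-way replication tolerates 2 lost copies.

        def replication_overhead(copies: int = 3) -> float:
            return float(copies)  # raw bytes stored per logical byte

        def erasure_overhead(n: int, k: int) -> float:
            return n / k          # e.g. RS(14, 10) -> 1.4x

        print(replication_overhead())    # 3.0
        print(erasure_overhead(14, 10))  # 1.4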

    Enabling Distributed Applications Optimization in Cloud Environment

    The past few years have seen dramatic growth in the popularity of public clouds, such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Container-as-a-Service (CaaS). In both commercial and scientific fields, quick environment setup and application deployment have become mandatory requirements. As a result, more and more organizations choose cloud environments instead of setting up environments themselves from scratch. Cloud computing resources such as server engines, orchestration, and the underlying server resources are offered to users as a service by a cloud provider. Most applications that run in public clouds are distributed applications, also called multi-tier applications, which require a set of servers, a service ensemble, that cooperate and communicate to jointly provide a certain service or accomplish a task. However, only a few research efforts have been devoted to providing an overall solution for distributed application optimization in the public cloud. In this dissertation, we present three systems that enable distributed application optimization: (1) the first part introduces DocMan, a toolset for detecting containerized applications' dependencies in CaaS clouds; (2) the second part introduces a system to deal with hot/cold blocks in distributed applications; (3) the third part introduces FP4S, a novel fragment-based parallel state recovery mechanism that can handle many simultaneous failures for a large number of concurrently running stream applications.

    A Novel Completely Local Repairable Code Algorithm Based on Erasure Code

    The Hadoop Distributed File System (HDFS) is widely used for massive data storage. Because of the drawbacks of the multi-copy strategy, the hardware expansion of HDFS cannot keep up with the continuous growth of big data. The traditional data replication strategy is therefore gradually being replaced by erasure coding, thanks to its smaller redundancy rate and storage overhead. However, compared with replication, erasure codes need to read a certain number of data blocks during data recovery, incurring a large I/O and network overhead. Based on the Reed-Solomon (RS) algorithm, we propose a novel Completely Local Repairable Code (CLRC) algorithm. By grouping RS coded blocks and generating local check blocks, CLRC improves the locality of the RS algorithm, which reduces the cost of data recovery. Evaluations show that CLRC reduces bandwidth and I/O consumption during data recovery when a single block is damaged. Moreover, its decoding time is only 59% of that of the RS algorithm.
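
    The locality idea can be sketched as follows: split an RS stripe's data blocks into groups and add an XOR local check block per group, so a single lost block is rebuilt from its small group rather than from k blocks of the full stripe. Below is a generic local-repair sketch in Python, assuming the grouping structure described above; it is not the paper's exact CLRC construction.

        # Generic local-repair sketch: XOR local parity per group of data blocks.
        from functools import reduce

        def xor_blocks(blocks: list[bytes]) -> bytes:
            return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

        def make_local_parities(stripe: list[bytes], group_size: int) -> list[bytes]:
            """One XOR check block per group of group_size data blocks."""
            return [xor_blocks(stripe[i:i + group_size])
                    for i in range(0, len(stripe), group_size)]

        def repair_single(group: list[bytes], local_parity: bytes, lost: int) -> bytes:
            """Rebuild one lost block from its group members plus the local parity."""
            survivors = [b for j, b in enumerate(group) if j != lost]
            return xor_blocks(survivors + [local_parity])

        # k = 6 data blocks in groups of 3: repairing one block reads 3 blocks
        # (2 survivors + 1 local check block) instead of k = 6.
        stripe = [bytes([i]) * 4 for i in range(6)]
        parities = make_local_parities(stripe, 3)
        assert repair_single(stripe[0:3], parities[0], lost=1) == stripe[1]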

    Efficient data reliability management of cloud storage systems for big data applications

    Cloud service providers strive to offer efficient and reliable service for their clients' Big Data storage needs. Replication is a simple and flexible method to ensure the reliability and availability of data. However, it is not an efficient solution for Big Data, which routinely scales to terabytes and petabytes; hence erasure coding is gaining traction despite its shortcomings. Deploying erasure coding in cloud storage confronts several challenges, such as encoding/decoding complexity, load balancing, excessive resource consumption due to data repair, and read latency. This thesis addresses several of these challenges. Even though data durability and availability should not be compromised for any reason, a client's requirements on read performance (access latency) may vary with the nature of the data and its access pattern. Access latency is an important metric, and the acceptable latency range can be recorded in the client's SLA. Several proactive recovery methods for erasure codes are proposed in this research to reduce the resource consumption caused by recovery. In addition, a novel cache-based solution is proposed to mitigate the access-latency issue of erasure coding.
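
    The cache-based idea can be illustrated with a small LRU cache that keeps recently reconstructed blocks in memory, so repeated reads of a block on a failed node skip the expensive k-block degraded-read path. This is an assumed design sketched for illustration, not the thesis's actual scheme; DegradedReadCache and its interface are hypothetical.

        # Hypothetical LRU cache for reconstructed blocks (illustrative design).
        from collections import OrderedDict
        from typing import Callable

        class DegradedReadCache:
            def __init__(self, capacity: int = 128):
                self.capacity = capacity
                self._cache: OrderedDict[str, bytes] = OrderedDict()

            def read(self, block_id: str, reconstruct: Callable[[], bytes]) -> bytes:
                """Return the block, running the k-block decode only on a miss."""
                if block_id in self._cache:
                    self._cache.move_to_end(block_id)  # mark as recently used
                    return self._cache[block_id]
                block = reconstruct()                  # expensive degraded read
                self._cache[block_id] = block
                if len(self._cache) > self.capacity:
                    self._cache.popitem(last=False)    # evict least recently used
                return block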