
    New Centralized MSR Codes With Small Sub-packetization

    Centralized repair refers to repairing $h \geq 2$ node failures using $d$ helper nodes in a centralized way, where the repair bandwidth is counted as the total amount of data downloaded from the helper nodes. A centralized MSR code is an MDS array code with $(h,d)$-optimal repair for some $h$ and $d$. In this paper, we present several classes of centralized MSR codes with small sub-packetization. First, we construct an alternative MSR code with $(1,d_i)$-optimal repair for multiple repair degrees $d_i$ simultaneously. Based on this code structure, we are able to construct a centralized MSR code with the $(h_i,d_i)$-optimal repair property for all possible $(h_i,d_i)$ with $h_i \mid (d_i-k)$ simultaneously. The sub-packetization is no more than $\mathrm{lcm}(1,2,\ldots,n-k)\,(n-k)^n$, which is much smaller than that of a previous construction by Ye and Barg ($(\mathrm{lcm}(1,2,\ldots,n-k))^n$). Moreover, for general parameters $2 \leq h \leq n-k$ and $k \leq d \leq n-h$, we further give a centralized MSR code enabling $(h,d)$-optimal repair with sub-packetization smaller than in all previous works.
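The gap between the two sub-packetization bounds quoted above can be checked numerically. The sketch below computes both expressions with Python's `math.lcm`; the parameters $n, k$ are illustrative choices, not values from the paper:

```python
from math import lcm

def new_bound(n, k):
    """Sub-packetization bound from this paper: lcm(1..n-k) * (n-k)^n."""
    r = n - k
    return lcm(*range(1, r + 1)) * r ** n

def ye_barg_bound(n, k):
    """Earlier Ye-Barg bound: (lcm(1..n-k))^n."""
    r = n - k
    return lcm(*range(1, r + 1)) ** n

# Illustrative parameters (not taken from the paper).
n, k = 10, 6
print(new_bound(n, k))      # lcm(1..4) * 4^10 = 12 * 1048576 = 12582912
print(ye_barg_bound(n, k))  # lcm(1..4)^10 = 12^10 = 61917364224
```

Even at these small parameters the new bound is three orders of magnitude smaller; the gap widens rapidly with $n$, since the lcm factor appears once rather than $n$ times.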

    On Epsilon-MSCR Codes for Two Erasures

    Cooperative regenerating codes are regenerating codes designed to trade off storage for repair bandwidth in the case of multiple node failures. Minimum storage cooperative regenerating (MSCR) codes are the class of cooperative regenerating codes that achieve the minimum-storage point of this tradeoff. Recently, these codes have been constructed for all possible parameters $(n,k,d,h)$, where $h$ erasures are repaired by contacting any $d$ surviving nodes. However, these constructions have very large sub-packetization. $\epsilon$-MSR codes are a class of codes introduced to trade off sub-packetization level for a slight increase in the repair bandwidth in the case of a single node failure. We introduce the framework of $\epsilon$-MSCR codes, which allow a similar tradeoff in the case of multiple node failures. We present a construction of $\epsilon$-MSCR codes that can recover from two node failures, obtained by concatenating a class of MSCR codes and scalar linear codes. We give a repair procedure for $\epsilon$-MSCR codes in the event of two node failures and calculate its repair bandwidth. We characterize the increase in repair bandwidth incurred by this method in comparison with the optimal repair bandwidth given by the cut-set bound. Finally, we show that the sub-packetization level of $\epsilon$-MSCR codes scales logarithmically in the number of nodes.
    Comment: 14 pages. Keywords: Cooperative repair, MSCR codes, Subpacketization
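The cut-set bound mentioned above admits a small numerical sketch. Assuming the standard MSCR cut-set expression, under which each of the $h$ newcomers downloads $(d+h-1)\alpha/(d+h-k)$ symbols, the sketch below compares the optimal cooperative repair bandwidth against the naive strategy of decoding the entire file; parameters are illustrative, not from the paper:

```python
from fractions import Fraction

def mscr_repair_bandwidth(n, k, d, h, alpha=1):
    """Cut-set lower bound on total repair bandwidth at the MSCR point:
    each of the h newcomers downloads (d + h - 1) * alpha / (d + h - k)
    symbols (from d helpers and the h - 1 other newcomers)."""
    per_node = Fraction((d + h - 1) * alpha, d + h - k)
    return h * per_node

def naive_repair_bandwidth(k, alpha=1):
    """Baseline: download k * alpha symbols to decode the whole file,
    then re-encode the lost blocks."""
    return Fraction(k * alpha)

# Illustrative parameters for two erasures (h = 2).
print(mscr_repair_bandwidth(n=10, k=6, d=8, h=2))  # 2 * 9/4 = 9/2
print(naive_repair_bandwidth(k=6))                 # 6
```

Exact `Fraction` arithmetic is used so the bound is reported without floating-point rounding.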

    Network Traffic Driven Storage Repair

    Recently we constructed an explicit family of locally repairable and locally regenerating codes. Their existence was proven by Kamath et al., but no explicit construction was given. Our design is based on HashTag codes, which can have different sub-packetization levels. In this work we emphasize the importance of having two ways to repair a node: repair with local parity nodes only, or repair with both local and global parity nodes. We say that the repair strategy is network-traffic driven since it depends on the concrete system and code parameters: the repair bandwidth of the code, the number of I/O operations, the access time for the contacted parts, and the size of the stored file. We show the benefits of this repair duality in a practical example implemented in Hadoop. We also give algorithms for efficient repair of the global parity nodes.
    Comment: arXiv admin note: text overlap with arXiv:1701.0666
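The idea of choosing between the two repair paths based on system parameters can be sketched as a weighted cost comparison. Everything below (the names, the cost model, the numbers) is a hypothetical illustration, not the paper's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class RepairPlan:
    # Hypothetical cost summary of one repair strategy.
    name: str
    bandwidth: int  # bytes transferred over the network
    io_ops: int     # number of disk reads issued

def choose_repair(local, hybrid, bandwidth_weight=1.0, io_weight=1.0):
    """Pick the cheaper of local-only vs. local+global repair under a
    weighted cost combining repair bandwidth and I/O operations."""
    def cost(plan):
        return bandwidth_weight * plan.bandwidth + io_weight * plan.io_ops
    return min(local, hybrid, key=cost)

# Illustrative numbers: the hybrid path moves less data but touches
# more nodes, so the winner depends on the chosen weights.
local = RepairPlan("local parities only", bandwidth=4_000_000, io_ops=4)
hybrid = RepairPlan("local + global parities", bandwidth=3_000_000, io_ops=7)
print(choose_repair(local, hybrid).name)  # local + global parities
```

Raising `io_weight` (e.g., on a cluster where seeks dominate) would flip the decision toward the local-only plan, which is the kind of traffic-driven duality the abstract describes.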

    Convertible Codes: New Class of Codes for Efficient Conversion of Coded Data in Distributed Storage

    Erasure codes are typically used in large-scale distributed storage systems to provide durability of data in the face of failures. In this setting, a set of k blocks to be stored is encoded using an [n, k] code to generate n blocks that are then stored on different storage nodes. A recent work by Kadekodi et al. [Kadekodi et al., 2019] shows that the failure rate of storage devices varies significantly over time, and that changing the rate of the code (via a change in the parameters n and k) in response to such variations yields a significant reduction in storage space requirements. However, the resource overhead of realizing such a change in the code rate on already encoded data with traditional codes is prohibitively high. Motivated by this application, in this work we first present a new framework to formalize the notion of code conversion: the process of converting data encoded with an [n^I, k^I] code into data encoded with an [n^F, k^F] code while maintaining desired decodability properties, such as the maximum-distance-separable (MDS) property. We then introduce convertible codes, a new class of code pairs that allow for code conversions in a resource-efficient manner. For an important parameter regime (which we call the merge regime), along with the widely used linearity and MDS decodability constraints, we prove tight bounds on the number of nodes accessed during code conversion. In particular, our achievability result is an explicit construction of MDS convertible codes that is optimal for all parameter values in the merge regime, albeit with a high field size. We then present explicit low-field-size constructions of optimal MDS convertible codes for a broad range of parameters in the merge regime. Our results thus show that it is indeed possible to achieve code conversions with significantly fewer resources than the default approach of re-encoding.
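The [n, k] MDS encoding described at the start of this abstract can be illustrated with a toy systematic Reed-Solomon code over a small prime field: any k of the n blocks suffice to recover the data, which is the decodability property conversions must preserve. All parameters and helper names are illustrative, not from the paper:

```python
P = 257  # small prime field GF(257); real systems use GF(2^8) or larger

def interpolate_at(xs, ys, x0):
    """Lagrange interpolation over GF(P): evaluate at x0 the unique
    degree-(len(xs)-1) polynomial through the points (xs[i], ys[i])."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

k, n = 4, 7
data = [10, 20, 30, 40]           # k data blocks (one symbol each)
xs_data = list(range(1, k + 1))   # block i lives at evaluation point i+1
parity = [interpolate_at(xs_data, data, x) for x in range(k + 1, n + 1)]
blocks = data + parity            # n stored blocks

# Erase any n - k = 3 blocks; recover the data from the remaining k.
survivors = [0, 2, 5, 6]
xs = [i + 1 for i in survivors]
ys = [blocks[i] for i in survivors]
recovered = [interpolate_at(xs, ys, x) for x in xs_data]
print(recovered)  # [10, 20, 30, 40]
```

The default conversion approach the abstract compares against would decode all the data this way and re-encode it under the new [n^F, k^F] parameters; convertible codes are designed to avoid touching that many nodes.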