Low-Complexity Codes for Random and Clustered High-Order Failures in Storage Arrays
RC (Random/Clustered) codes are a new efficient array-code family for recovering from 4-erasures. RC codes correct most 4-erasures, and essentially all 4-erasures that are clustered. Clustered erasures are introduced as a new erasure model for storage arrays. This model draws its motivation from correlated device failures caused by physical proximity of devices, or by age proximity of endurance-limited solid-state drives. The reliability of storage arrays that employ RC codes is analyzed and compared to known codes. The new RC code is significantly more efficient, in all practical implementation factors, than the best known 4-erasure-correcting MDS code. These factors include: small-write update complexity, full-device update complexity, decoding complexity, and the number of supported devices in the array.
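The clustered-erasure model can be explored with a small simulation. The sketch below uses an illustrative definition of "clustered" (all failed devices fall within a window of consecutive positions); the paper's precise definition may differ, and the function names and window parameter are assumptions for this sketch.

```python
import random

def random_4_erasure(n):
    """Pick 4 distinct failed device indices uniformly at random."""
    return sorted(random.sample(range(n), 4))

def is_clustered(failures, window):
    """Illustrative notion of 'clustered': all failures fall inside a
    span of `window` consecutive device positions (the paper's exact
    definition may differ)."""
    return failures[-1] - failures[0] < window

def clustered_fraction(n, window, trials=100_000):
    """Estimate how often a uniformly random 4-erasure is clustered."""
    random.seed(0)
    hits = sum(is_clustered(random_4_erasure(n), window)
               for _ in range(trials))
    return hits / trials
```

For example, `clustered_fraction(16, 8)` estimates how often a uniformly random 4-erasure in a 16-device array would already count as clustered; correlated failures make such patterns far more frequent than the uniform model predicts, which is what motivates codes tailored to them.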
X-code: MDS array codes with optimal encoding
We present a new class of MDS (maximum distance separable) array codes of size n×n (n a prime number) called X-code. The X-codes are of minimum column distance 3, namely, they can correct either one column error or two column erasures. The key novelty in X-code is that it has a simple geometrical construction which achieves optimal encoding/update complexity, i.e., a change of any single information bit affects exactly two parity bits. The key idea in our construction is that all parity symbols are placed in rows rather than columns.
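The update-optimality property can be checked directly in code. The sketch below builds the two parity rows along diagonals of slopes +1 and -1; the particular diagonal offsets are my reconstruction for illustration, not copied from the paper, but the structure (one parity symbol per diagonal, one diagonal of each slope per information bit) is what yields the "exactly two parity bits" property.

```python
def xcode_parities(info, n):
    """Sketch of an X-code-style construction over an n x n array
    (n prime): rows 0..n-3 hold information bits, and two parity rows
    are computed along diagonals of slopes +1 and -1.

    info: (n-2) x n array of bits. Returns the two parity rows.
    Each information bit lies on exactly one diagonal of each slope,
    so flipping it changes exactly one bit in each parity row."""
    p1 = [0] * n  # parities along slope +1 diagonals
    p2 = [0] * n  # parities along slope -1 diagonals
    for k in range(n - 2):
        for j in range(n):
            p1[(j - k - 2) % n] ^= info[k][j]
            p2[(j + k + 2) % n] ^= info[k][j]
    return p1, p2
```

Flipping any single `info[k][j]` and recomputing changes exactly two parity bits, which is the optimal small-write update complexity the abstract claims.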
Optimal Rebuilding of Multiple Erasures in MDS Codes
MDS array codes are widely used in storage systems due to their
computationally efficient encoding and decoding procedures. An MDS code with
r redundancy nodes can correct any r node erasures by accessing all the
remaining information in the surviving nodes. However, in practice,
e erasures is a more likely failure event, for 1 <= e < r. Hence, a natural
question is how much information do we need to access in order to rebuild
e storage nodes? We define the rebuilding ratio as the fraction of remaining
information accessed during the rebuilding of e erasures. In our previous
work we constructed MDS codes, called zigzag codes, that achieve the optimal
rebuilding ratio of 1/r for the rebuilding of any systematic node when e = 1;
however, all the information needs to be accessed for the rebuilding of a
parity node erasure.
The (normalized) repair bandwidth is defined as the fraction of information
transmitted from the remaining nodes during the rebuilding process. For codes
that are not necessarily MDS, Dimakis et al. proposed the regenerating codes
framework, where any r erasures can be corrected by accessing some of the
remaining information, and any single erasure can be rebuilt from some subsets
of surviving nodes with optimal repair bandwidth.
In this work, we study three questions on the rebuilding of codes: (i) we show
a fundamental trade-off between the storage size of the node and the repair
bandwidth, similar to the regenerating codes framework, and show that zigzag
codes achieve the optimal rebuilding ratio of e/r for MDS codes, for any
1 <= e <= r; (ii) we construct systematic codes that achieve the optimal
rebuilding ratio for any systematic or parity node erasure; (iii) we
present error correction algorithms for zigzag codes, and in particular
demonstrate how these codes can be corrected beyond their minimum Hamming
distances.
Comment: There is an overlap of this work with our two previous submissions:
Zigzag Codes: MDS Array Codes with Optimal Rebuilding; On Codes for Optimal
Rebuilding Access. arXiv admin note: text overlap with arXiv:1112.037
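The access savings can be made concrete with a little arithmetic. The sketch below compares plain MDS rebuilding (read any k full nodes out of the n - e survivors) against the e/r optimal-access ratio; the parameters n, k, r, e are generic placeholders, not values from the paper.

```python
def naive_ratio(n, k, e):
    """Plain MDS rebuilding reads k full nodes out of the n - e
    surviving nodes, i.e., a fraction k / (n - e) of what remains."""
    assert 1 <= e <= n - k
    return k / (n - e)

def optimal_ratio(e, r):
    """Optimal rebuilding ratio e/r for rebuilding e erasures in an
    MDS code with r redundancy nodes (1 <= e <= r), as achieved by
    zigzag codes."""
    assert 1 <= e <= r
    return e / r
```

For instance, with n = 10 nodes, k = 8 systematic nodes (so r = 2), and a single failure, naive rebuilding touches 8/9 of the surviving data, while an optimal-access code touches only 1/2 of it.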
Density Evolution for Deterministic Generalized Product Codes with Higher-Order Modulation
Generalized product codes (GPCs) are extensions of product codes (PCs) where
coded bits are protected by two component codes but not necessarily arranged in
a rectangular array. It has recently been shown that there exists a large class
of deterministic GPCs (including, e.g., irregular PCs, half-product codes,
staircase codes, and certain braided codes) for which the asymptotic
performance under iterative bounded-distance decoding over the binary erasure
channel (BEC) can be rigorously characterized in terms of a density evolution
analysis. In this paper, the analysis is extended to the case where
transmission takes place over parallel BECs with different erasure
probabilities. We use this model to predict the code performance in a coded
modulation setup with higher-order signal constellations. We also discuss the
design of the bit mapper that determines the allocation of the coded bits to
the modulation bits of the signal constellation.
Comment: invited and accepted paper for the special session "Recent Advances
in Coding for Higher Order Modulation" at the International Symposium on
Turbo Codes & Iterative Information Processing, Brest, France, 201
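Iterative bounded-distance decoding over the BEC is easy to see in the simplest setting: a product code whose rows and columns are single-parity-check codewords, each able to fill exactly one erasure. This toy stand-in (my simplification; the paper's GPCs use stronger component codes) still exhibits the key density-evolution phenomenon that decoding stalls on stopping sets.

```python
def iterative_erasure_decode(grid):
    """Iterative bounded-distance erasure decoding of a toy product
    code: grid is a 2-D list of 0/1 bits with None marking BEC
    erasures, and every row and column has even parity. Repeatedly
    fill any row or column containing exactly one erasure."""
    rows, cols = len(grid), len(grid[0])
    progress = True
    while progress:
        progress = False
        all_lines = (
            [[(i, j) for j in range(cols)] for i in range(rows)]   # rows
            + [[(i, j) for i in range(rows)] for j in range(cols)]  # columns
        )
        for line in all_lines:
            erased = [(i, j) for (i, j) in line if grid[i][j] is None]
            if len(erased) == 1:
                known = sum(grid[i][j] for (i, j) in line
                            if grid[i][j] is not None)
                i, j = erased[0]
                grid[i][j] = known % 2  # restore even parity
                progress = True
    return grid
```

Density evolution for such schemes tracks the fraction of unresolved erasures per iteration; a 2x2 block of erasures is the smallest stopping set here, which the decoder cannot resolve no matter how many iterations it runs.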
Cooperative Local Repair in Distributed Storage
Erasure-correcting codes that support local repair of codeword symbols have
attracted substantial attention recently for their application in distributed
storage systems. This paper investigates a generalization of the usual locally
repairable codes. In particular, this paper studies a class of codes with the
following property: any small set of codeword symbols can be reconstructed
(repaired) from a small number of other symbols. This is referred to as
cooperative local repair. The main contribution of this paper is bounds on the
trade-off of the minimum distance and the dimension of such codes, as well as
explicit constructions of families of codes that enable cooperative local
repair. Some other results regarding cooperative local repair are also
presented, including an analysis for the well-known Hadamard/Simplex codes.
Comment: Fixed some minor issues in Theorem 1, EURASIP Journal on Advances in
Signal Processing, December 201
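The Simplex-code analysis mentioned above rests on a classic pairwise-repair property that is easy to demonstrate. In the binary simplex code of dimension k, there is one symbol c[v] = <m, v> mod 2 for each nonzero v in F_2^k, and linearity gives c[u] XOR c[w] = c[u XOR w]; so any erased symbol can be recovered from just two survivors, and a small erased set can be repaired by picking pairs that avoid it. The sketch below is a simplified instance of this idea, not the paper's general construction.

```python
def simplex_encode(m, k):
    """Binary simplex code: one symbol per nonzero v in F_2^k,
    with c[v] = <m, v> mod 2 (inner product of bit vectors)."""
    return {v: bin(m & v).count("1") % 2 for v in range(1, 2 ** k)}

def repair(c, erased, v):
    """Repair the erased symbol c[v] from a surviving pair (u, w)
    with u ^ w == v, using c[u] ^ c[w] == c[u ^ w] == c[v]."""
    for u in c:
        w = u ^ v
        if u != v and w in c and u not in erased and w not in erased:
            return c[u] ^ c[w]
    return None  # no surviving repair pair found
```

Because every v has many decompositions v = u ^ w, several erased symbols can be repaired cooperatively as long as, for each one, some decomposition avoids the whole erased set, which is the flavor of locality the paper formalizes and bounds.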