MDS Array Codes with Optimal Rebuilding
MDS array codes are widely used in storage systems
to protect data against erasures. We address the rebuilding ratio
problem, namely, in the case of erasures, what is the fraction
of the remaining information that needs to be accessed in order
to rebuild exactly the lost information? It is clear that when the
number of erasures equals the maximum number of erasures
that an MDS code can correct, then the rebuilding ratio is 1
(access all the remaining information). However, the interesting
(and more practical) case is when the number of erasures is
smaller than the erasure correcting capability of the code. For
example, consider an MDS code that can correct two erasures:
What is the smallest amount of information that one needs to
access in order to correct a single erasure? Previous work showed
that the rebuilding ratio is bounded between 1/2 and 3/4; however,
the exact value was left as an open problem. In this paper, we
solve this open problem and prove that for the case of a single
erasure with a 2-erasure correcting code, the rebuilding ratio is
1/2. In general, we construct a new family of r-erasure correcting MDS array
codes that has optimal rebuilding ratio of 1/r in the case of a single
erasure. Our array codes have efficient encoding and decoding algorithms (for
the case r = 2 they use a finite field of size 3) and an optimal update
property.
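As a concrete illustration of a 1/2 rebuilding ratio, here is a minimal Python sketch in the spirit of the construction. The 2x2 array layout, the zigzag permutation, and the GF(3) coefficients below are toy choices made for this example, not the paper's actual code:

```python
# Toy 2-parity MDS-style array code over GF(3): two data columns a, b, a row
# parity column r, and a "zigzag" parity column z built from a permuted sum.
# The coefficients here are illustrative choices, not the paper's construction.

def encode(a, b):
    r = [(a[0] + b[0]) % 3, (a[1] + b[1]) % 3]        # row parity
    z = [(a[0] + b[1]) % 3, (a[1] + 2 * b[0]) % 3]    # zigzag parity
    return r, z

def rebuild_a(b, r, z):
    """Rebuild erased column a by reading only 3 of the 6 surviving
    elements (b0, r0, z1) -- a rebuilding ratio of 1/2."""
    accessed = {"b0": b[0], "r0": r[0], "z1": z[1]}
    a0 = (r[0] - b[0]) % 3        # from the row parity
    a1 = (z[1] - 2 * b[0]) % 3    # from the zigzag parity, reusing b0
    return [a0, a1], accessed

a, b = [1, 2], [2, 0]
r, z = encode(a, b)
rebuilt, accessed = rebuild_a(b, r, z)
assert rebuilt == a
assert len(accessed) == 3   # 3 of the 6 remaining elements -> ratio 1/2
```

The point of the zigzag parity is that both lost symbols can be recovered while touching the same surviving data element (b0), which is what drives the access count below the naive 2/3.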
Low-Complexity Codes for Random and Clustered High-Order Failures in Storage Arrays
RC (Random/Clustered) codes are a new efficient array-code family for recovering from 4-erasures. RC codes correct most 4-erasures, and essentially all 4-erasures that are clustered. Clustered erasures are introduced as a new erasure model for storage arrays. This model draws its motivation from correlated device failures that are caused by the physical proximity of devices, or by the age proximity of endurance-limited solid-state drives. The reliability of storage arrays that employ RC codes is analyzed and compared to known codes. The new RC code is significantly more efficient, in all practical implementation factors, than the best known 4-erasure correcting MDS code. These factors include small-write update-complexity, full-device update-complexity, decoding complexity, and the number of supported devices in the array.
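One simple way to see why targeting clustered erasures pays off is to count them. The window-based definition below (all failed devices fall within w consecutive positions) is an illustrative formalization, not necessarily the paper's exact model:

```python
# Count 4-erasures that are "clustered" (all four failures fit inside a
# window of w consecutive device positions) versus all possible 4-erasures.
# The window definition is an illustrative choice, not the paper's model.
from itertools import combinations
from math import comb

def clustered_fraction(n, e=4, w=4):
    total = comb(n, e)
    clustered = sum(1 for s in combinations(range(n), e)
                    if max(s) - min(s) < w)
    return clustered, total

clustered, total = clustered_fraction(16)
# with w == e, only runs of 4 consecutive devices qualify: n - 3 = 13
assert clustered == 13 and total == 1820
```

Clustered 4-erasures are a tiny fraction of all 4-erasures, so a code that handles the clustered ones (plus most random ones) can afford to be much cheaper than a full 4-erasure MDS code.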
Zigzag Codes: MDS Array Codes with Optimal Rebuilding
MDS array codes are widely used in storage systems to protect data against
erasures. We address the \emph{rebuilding ratio} problem, namely, in the case
of erasures, what is the fraction of the remaining information that needs to be
accessed in order to rebuild \emph{exactly} the lost information? It is clear
that when the number of erasures equals the maximum number of erasures that an
MDS code can correct, then the rebuilding ratio is 1 (access all the remaining
information). However, the interesting and more practical case is when the
number of erasures is smaller than the erasure correcting capability of the
code. For example, consider an MDS code that can correct two erasures: What is
the smallest amount of information that one needs to access in order to correct
a single erasure? Previous work showed that the rebuilding ratio is bounded
between 1/2 and 3/4, however, the exact value was left as an open problem. In
this paper, we solve this open problem and prove that for the case of a single
erasure with a 2-erasure correcting code, the rebuilding ratio is 1/2. In
general, we construct a new family of $r$-erasure correcting MDS array codes
that has optimal rebuilding ratio of $e/r$ in the case of $e$ erasures,
$1 \le e \le r$. Our array codes have efficient encoding and decoding
algorithms (for the case $r = 2$ they use a finite field of size 3) and an
optimal update property.
Comment: 23 pages, 5 figures, submitted to IEEE Transactions on Information Theory
Quantum optical coherence can survive photon losses: a continuous-variable quantum erasure correcting code
A fundamental requirement for enabling fault-tolerant quantum information
processing is an efficient quantum error-correcting code (QECC) that robustly
protects the involved fragile quantum states from their environment. Just as
classical error-correcting codes are indispensable in today's information
technologies, it is believed that QECC will play a similarly crucial role in
tomorrow's quantum information systems. Here, we report on the first
experimental demonstration of a quantum erasure-correcting code that overcomes
the devastating effect of photon losses. Whereas {\it errors} represent, in
information-theoretic language, the noise affecting a transmission line, {\it
erasures} correspond to the in-line probabilistic loss of photons. Our quantum
code protects a four-mode entangled mesoscopic state of light against erasures,
and its associated encoding and decoding operations only require linear optics
and Gaussian resources. Since in-line attenuation is generally the strongest
limitation to quantum communication, much more than noise, such an
erasure-correcting code provides a new tool for establishing quantum optical
coherence over longer distances. We investigate two approaches for
circumventing in-line losses using this code, and demonstrate that both
approaches exhibit transmission fidelities beyond what is possible by classical
means.
Comment: 5 pages, 4 figures
Quantum error correction via robust probe modes
We propose a new scheme for quantum error correction using robust continuous
variable probe modes, rather than fragile ancilla qubits, to detect errors
without destroying data qubits. The use of such probe modes reduces the
required number of expensive qubits in error correction and allows efficient
encoding, error detection and error correction. Moreover, the elimination of
the need for direct qubit interactions significantly simplifies the
construction of quantum circuits. We will illustrate how the approach
implements three existing quantum error correcting codes: the 3-qubit bit-flip
(phase-flip) code, the Shor code, and an erasure code.
Comment: 5 pages, 3 figures
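The 3-qubit bit-flip code mentioned above has a simple classical skeleton: two parity checks (the syndromes) locate a single flipped bit without revealing the encoded value, which is the role the probe modes play in the proposed scheme. The sketch below shows only this classical skeleton, not the probe-mode construction itself:

```python
# Classical skeleton of the 3-qubit bit-flip code: two syndrome parities
# identify which of the three bits flipped, without exposing the data bit.

def encode(bit):
    return [bit, bit, bit]             # repetition encoding (|000>, |111>)

def syndrome(q):
    return (q[0] ^ q[1], q[1] ^ q[2])  # the two parity checks

def correct(q):
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(q))
    if flip_at is not None:
        q[flip_at] ^= 1                # undo the located bit flip
    return q

for bit in (0, 1):
    for pos in range(3):
        q = encode(bit)
        q[pos] ^= 1                    # inject a single bit-flip error
        assert correct(q) == [bit] * 3
```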
Low-density MDS codes and factors of complete graphs
We present a class of array codes of size n×l, where l=2n or 2n+1, called B-Code. The distances of the B-Code and its dual are 3 and l-1, respectively. The B-Code and its dual are optimal in the sense that i) they are maximum-distance separable (MDS), ii) they have an optimal encoding property, i.e., the number of parity bits affected by a change of a single information bit is minimal, and iii) they have optimal length. Using a new graph description of the codes, we prove an equivalence between the construction of the B-Code (or its dual) and a combinatorial problem known as perfect one-factorization of complete graphs, thus obtaining constructions of two families of the B-Code and its dual, one of which is new. Efficient decoding algorithms are also given, both for erasure correction and for error correction. The existence of perfect one-factorizations for every complete graph with an even number of nodes is a 35-year-old conjecture in graph theory. The construction of B-Codes of arbitrary odd length would provide an affirmative answer to this conjecture.
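For concreteness, a one-factorization of K_{2n} (a partition of its edges into perfect matchings) can be built with the standard round-robin "circle" method, sketched below. This produces *a* one-factorization; whether it is *perfect* (every union of two factors is a Hamiltonian cycle) is exactly the hard open question the abstract refers to:

```python
# Round-robin (circle method) one-factorization of the complete graph K_{2n}:
# fix one vertex and rotate the rest, pairing symmetrically around the
# rotation point.  Produces 2n-1 perfect matchings covering every edge once.

def one_factorization(n2):
    """Partition the edges of K_{n2} (n2 even) into n2 - 1 perfect matchings."""
    assert n2 % 2 == 0
    m = n2 - 1
    factors = []
    for r in range(m):
        factor = [(m, r)]                              # fixed vertex m pairs with r
        for k in range(1, n2 // 2):
            factor.append(((r + k) % m, (r - k) % m))  # symmetric pairs around r
        factors.append(factor)
    return factors

factors = one_factorization(8)
assert len(factors) == 7
seen = set()
for f in factors:
    verts = sorted(v for e in f for v in e)
    assert verts == list(range(8))                 # each factor is a perfect matching
    seen.update(frozenset(e) for e in f)
assert len(seen) == 8 * 7 // 2                     # all C(8,2) edges covered once
```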