The Road From Classical to Quantum Codes: A Hashing Bound Approaching Design Procedure
Powerful Quantum Error Correction Codes (QECCs) are required for stabilizing
and protecting fragile qubits against the undesirable effects of quantum
decoherence. Similar to classical codes, hashing bound approaching QECCs may be
designed by exploiting a concatenated code structure, which invokes iterative
decoding. Therefore, in this paper we provide an extensive step-by-step
tutorial for designing EXtrinsic Information Transfer (EXIT) chart aided
concatenated quantum codes based on the underlying quantum-to-classical
isomorphism. These design lessons are then exemplified in the context of our
proposed Quantum Irregular Convolutional Code (QIRCC), which constitutes the
outer component of a concatenated quantum code. The proposed QIRCC can be
dynamically adapted to match any given inner code using EXIT charts, hence
achieving a performance close to the hashing bound. It is demonstrated that our
QIRCC-based optimized design is capable of operating within 0.4 dB of the noise limit.
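EXIT charts track the mutual information exchanged between the concatenated decoders over the iterations. As a rough illustration (our own sketch, not taken from the paper), one point on an EXIT curve is obtained by measuring the mutual information between the transmitted bits and the LLRs a decoder sees; the consistent-Gaussian a-priori model and all function names below are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def llr_mutual_information(bits, llrs):
    # Time-average estimate of I(X; L) for symmetric ("consistent") LLRs:
    # I = 1 - E[ log2(1 + exp(-x * L)) ], with x = +1 for bit 0, -1 for bit 1.
    x = 1.0 - 2.0 * bits
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * llrs)))

def a_priori_llrs(bits, sigma):
    # Consistent-Gaussian model commonly used to synthesise a-priori
    # information for EXIT analysis: L ~ N(x * sigma^2 / 2, sigma^2).
    x = 1.0 - 2.0 * bits
    return sigma**2 / 2.0 * x + sigma * rng.standard_normal(bits.shape)

bits = rng.integers(0, 2, 100_000)
for sigma in (0.5, 2.0, 8.0):
    # Mutual information grows from ~0 towards 1 as the LLRs get stronger.
    print(sigma, llr_mutual_information(bits, a_priori_llrs(bits, sigma)))
```

Sweeping the a-priori mutual information this way, and measuring the extrinsic mutual information at the decoder output, traces one decoder's EXIT curve.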
On the Construction and Decoding of Concatenated Polar Codes
A scheme for concatenating the recently invented polar codes with interleaved
block codes is considered. By concatenating binary polar codes with interleaved
Reed-Solomon codes, we prove that the proposed concatenation scheme captures
the capacity-achieving property of polar codes, while having a significantly
better error-decay rate. We show that for any \eps < 1, and total frame length N, the parameters of the scheme can be set such that the frame error probability is less than 2^{-N^{1-\eps}}, while the scheme is still capacity achieving. This improves upon 2^{-N^{0.5-\eps}}, the frame error
probability of Arikan's polar codes. We also propose decoding algorithms for
concatenated polar codes, which significantly improve the error-rate
performance at finite block lengths while preserving the low decoding
complexity.
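The polar codes being concatenated are built from Arikan's recursive transform x = u F^{\otimes n} with kernel F = [[1,0],[1,1]]. As a quick illustrative sketch (ours, not the paper's concatenation scheme), the transform can be computed recursively over GF(2):

```python
import numpy as np

def polar_transform(u):
    # Recursive Arikan transform over GF(2): for N = 2 the kernel maps
    # (u1, u2) -> (u1 ^ u2, u2); for larger N, combine the two halves
    # with the kernel and recurse on each half.
    n = len(u)
    if n == 1:
        return u.copy()
    half = n // 2
    top = polar_transform((u[:half] + u[half:]) % 2)  # u1 ^ u2
    bot = polar_transform(u[half:])                   # u2
    return np.concatenate([top, bot])

u = np.array([1, 0, 1, 1])
x = polar_transform(u)  # equals u @ kron(F, F) mod 2
```

Because F^{\otimes n} is its own inverse over GF(2), applying the transform twice recovers the input, which makes the sketch easy to sanity-check.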
Entanglement-assisted quantum turbo codes
An unexpected breakdown in the existing theory of quantum serial turbo coding
is that a quantum convolutional encoder cannot simultaneously be recursive and
non-catastrophic. These properties are essential for quantum turbo code
families to have a minimum distance growing with blocklength and for their
iterative decoding algorithm to converge, respectively. Here, we show that the
entanglement-assisted paradigm simplifies the theory of quantum turbo codes, in
the sense that an entanglement-assisted quantum (EAQ) convolutional encoder can
possess both of the aforementioned desirable properties. We give several
examples of EAQ convolutional encoders that are both recursive and
non-catastrophic and detail their relevant parameters. We then modify the
quantum turbo decoding algorithm of Poulin et al., in order to have the
constituent decoders pass along only "extrinsic information" to each other
rather than a posteriori probabilities as in the decoder of Poulin et al., and
this leads to a significant improvement in the performance of unassisted
quantum turbo codes. Other simulation results indicate that
entanglement-assisted turbo codes can operate reliably in a noise regime 4.73
dB beyond that of standard quantum turbo codes, when used on a memoryless
depolarizing channel. Furthermore, several of our quantum turbo codes are
within 1 dB or less of their hashing limits, so that the performance of quantum
turbo codes is now on par with that of classical turbo codes. Finally, we prove
that entanglement is the resource that enables a convolutional encoder to be
both non-catastrophic and recursive because an encoder acting on only
information qubits, classical bits, gauge qubits, and ancilla qubits cannot
simultaneously satisfy them.
Comment: 31 pages, software for simulating EA turbo codes is available at
http://code.google.com/p/ea-turbo/ and a presentation is available at
http://markwilde.com/publications/10-10-EA-Turbo.ppt ; v2, revisions based on
feedback from journal; v3, modification of the quantum turbo decoding
algorithm that leads to improved performance over results in v2 and the
results of Poulin et al. in arXiv:0712.288
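The modification described here, passing only extrinsic rather than a-posteriori information, mirrors classical turbo decoding: each constituent SISO decoder excludes a bit's own contribution from the message it hands to its partner, so no decoder is fed back its own output. A minimal classical illustration (ours, not the quantum decoder itself) for a single-parity-check constituent, using the tanh rule:

```python
import numpy as np

def spc_extrinsic(llrs):
    # Extrinsic LLRs for a single-parity-check constraint: the output
    # for bit i combines every input EXCEPT llrs[i], which is exactly
    # the "extrinsic information only" message passing described above.
    t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
    out = np.empty_like(t)
    for i in range(len(t)):
        others = np.delete(t, i)
        out[i] = 2.0 * np.arctanh(np.prod(others))
    return out

ext = spc_extrinsic([2.0, -1.0, 0.5])
```

The a-posteriori LLR would be the channel LLR plus this extrinsic term; only the extrinsic part crosses to the other decoder.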
Achievable Information Rates for Coded Modulation with Hard Decision Decoding for Coherent Fiber-Optic Systems
We analyze the achievable information rates (AIRs) for coded modulation
schemes with QAM constellations with both bit-wise and symbol-wise decoders,
corresponding to the case where a binary code is used in combination with a
higher-order modulation using the bit-interleaved coded modulation (BICM)
paradigm and to the case where a nonbinary code over a field matched to the
constellation size is used, respectively. In particular, we consider hard
decision decoding, which is the preferable option for fiber-optic communication
systems where decoding complexity is a concern. Recently, Liga \emph{et al.} analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a \emph{hard decision decoder}, which, however, exploits \emph{soft information} about the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the
decoder is essentially the same as the complexity of a soft decision decoder.
In this paper, we analyze instead the AIRs for the standard hard decision
decoder, commonly used in practice, where the decoding is based on the Hamming
distance metric. We show that if standard hard decision decoding is used,
bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As
a result, contrary to the conclusion by Liga \emph{et al.}, binary decoders
together with the BICM paradigm are preferable for spectrally-efficient
fiber-optic systems. We also design binary and nonbinary staircase codes and
show that, in agreement with the AIRs, binary codes yield better performance.
Comment: Published in IEEE/OSA Journal of Lightwave Technology, 201
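A rough feel for why the decoding metric matters can be had from a toy model (ours, not the paper's exact channel): treat the hard-detected bits as m parallel binary symmetric channels for the bit-wise decoder, and the symbol as an M-ary symmetric channel for a Hamming-metric symbol-wise decoder, assuming independent bit errors with crossover probability p:

```python
import numpy as np

def h2(p):
    # Binary entropy function, clipped to avoid log(0).
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def air_bitwise(p, m):
    # m parallel BSCs with crossover p: m * (1 - h2(p)) bits per symbol.
    return m * (1.0 - h2(p))

def air_symbolwise(p, m):
    # Capacity of the M-ary symmetric channel seen by a Hamming-metric
    # symbol decoder, with symbol error probability Ps = 1 - (1-p)^m
    # under our independent-bit-error assumption.
    M = 2 ** m
    ps = 1.0 - (1.0 - p) ** m
    return m - h2(ps) - ps * np.log2(M - 1)

print(air_bitwise(0.01, 4), air_symbolwise(0.01, 4))
```

In this toy model the bit-wise AIR exceeds the symbol-wise AIR, in line with the paper's conclusion that binary decoders with BICM are preferable under standard hard decision decoding.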
Iterative Soft Input Soft Output Decoding of Reed-Solomon Codes by Adapting the Parity Check Matrix
An iterative algorithm is presented for soft-input-soft-output (SISO)
decoding of Reed-Solomon (RS) codes. The proposed iterative algorithm uses the
sum product algorithm (SPA) in conjunction with a binary parity check matrix of
the RS code. The novelty is in reducing a submatrix of the binary parity check
matrix that corresponds to less reliable bits to a sparse nature before the SPA
is applied at each iteration. The proposed algorithm can be geometrically
interpreted as a two-stage gradient descent with an adaptive potential
function. This adaptive procedure is crucial to the convergence behavior of the
gradient descent algorithm and, therefore, significantly improves the
performance. Simulation results show that the proposed decoding algorithm and
its variations provide significant gain over hard decision decoding (HDD) and
compare favorably with other popular soft decision decoding methods.
Comment: 10 pages, 10 figures, final version accepted by IEEE Trans. on Information Theory
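The key step, reducing the submatrix of the binary parity check matrix that corresponds to the least reliable bits to a sparse (unit-weight-column) form before each SPA pass, can be sketched as GF(2) Gaussian elimination directed by the bit reliabilities. A minimal sketch under our own naming, using a (7,4) Hamming parity check matrix for illustration:

```python
import numpy as np

def adapt_parity_check(H, reliabilities):
    # Drive the columns of H belonging to the least reliable bits toward
    # unit vectors by GF(2) Gaussian elimination, so each unreliable bit
    # participates in at most one check when the SPA is applied.
    H = H.copy() % 2
    n_checks = H.shape[0]
    order = np.argsort(reliabilities)       # least reliable first
    row = 0
    for col in order:
        if row >= n_checks:
            break
        pivots = np.nonzero(H[row:, col])[0]
        if len(pivots) == 0:
            continue                        # dependent column; skip it
        piv = row + pivots[0]
        H[[row, piv]] = H[[piv, row]]       # move the pivot row up
        for r in range(n_checks):
            if r != row and H[r, col]:
                H[r] ^= H[row]              # clear the other ones
        row += 1
    return H

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
rel = np.array([9, 9, 0, 9, 1, 9, 2])      # bits 2, 4, 6 least reliable
Ha = adapt_parity_check(H, rel)
```

The row operations keep the row space of H intact, so Ha defines the same code while giving the unreliable bits sparse columns for the next SPA iteration.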
Reliability Level List Based Iterative SISO Decoding Algorithm for Block Turbo Codes
An iterative Reliability Level List (RLL) based soft-input soft-output (SISO) decoding algorithm is proposed for Block Turbo Codes (BTCs). The algorithm adapts the RLL based decoding algorithm, which is inherently a soft-input hard-output algorithm, for the constituent block codes. The extrinsic information is calculated from the reliability of these hard-output decisions and is passed as soft input to the iterative turbo decoding process. RLL based decoding of the constituent codes estimates the optimal transmitted codeword through a directed minimal search. The proposed RLL based decoder for the constituent code replaces the Chase-2 based constituent decoder in the conventional SISO scheme. Simulation results show that the proposed algorithm clearly outperforms the conventional Chase-2 based SISO decoding scheme while reducing decoding latency at lower noise levels.
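For orientation, the abstract contrasts the RLL decoder with the Chase-2 based constituent decoder it replaces. A toy Chase-2-style constituent decoder looks like this (our sketch: an exhaustive-codebook search over a (7,4) Hamming code stands in for a real algebraic hard-decision decoder):

```python
import numpy as np
from itertools import product

# Toy (7,4) Hamming code: enumerate the codebook from a generator matrix.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
CODEBOOK = np.array([(np.array(m) @ G) % 2 for m in product((0, 1), repeat=4)])

def chase2_decode(r, max_flips=2):
    # Chase-2-style search: start from the hard decision, flip only the
    # least reliable positions, decode each test pattern, and keep the
    # candidate codeword closest (Euclidean metric) to the received r.
    hard = (r < 0).astype(int)              # BPSK mapping: 0 -> +1, 1 -> -1
    order = np.argsort(np.abs(r))           # least reliable bits first
    best, best_metric = None, np.inf
    for flips in product((0, 1), repeat=max_flips):
        cand = hard.copy()
        cand[order[:max_flips]] ^= np.array(flips)
        # Toy hard-decision decoding: nearest codeword in Hamming distance.
        d = np.abs(CODEBOOK - cand).sum(axis=1)
        cw = CODEBOOK[np.argmin(d)]
        metric = np.sum((r - (1 - 2 * cw)) ** 2)
        if metric < best_metric:
            best, best_metric = cw, metric
    return best
```

The RLL decoder proposed in the paper replaces this flip-and-reencode search with a reliability-level-ordered directed search over candidate codewords.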