
    Fully-parallel quantum turbo decoder

    Quantum Turbo Codes (QTCs) are known to operate close to the achievable Hashing bound. However, the sequential nature of the conventional quantum turbo decoding algorithm imposes a high decoding latency, which increases linearly with the frame length. This poses a potential threat to quantum systems having short coherence times. In this context, we conceive a Fully-Parallel Quantum Turbo Decoder (FPQTD), which eliminates the inherent time dependencies of the conventional decoder by executing all the associated processes concurrently. Due to its parallel nature, the proposed FPQTD reduces the decoding time by several orders of magnitude, while maintaining the same performance. We also demonstrate the significance of employing an odd-even interleaver design in conjunction with the proposed FPQTD. More specifically, it is shown that an odd-even interleaver reduces the computational complexity by 50%, without compromising the achievable performance.
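    The odd-even constraint referenced above is easy to state in code: the interleaver maps even-indexed positions to even-indexed positions and odd-indexed positions to odd-indexed positions, which is what allows the two halves of the frame to be processed in alternate time periods. The sketch below builds one such permutation in Python; the function name and the random construction are illustrative assumptions, not the specific interleaver of the paper.

```python
import random

def odd_even_interleaver(n, seed=0):
    """Build a permutation of range(n) in which even positions map to
    even positions and odd positions map to odd positions."""
    rng = random.Random(seed)
    evens = [i for i in range(n) if i % 2 == 0]
    odds = [i for i in range(n) if i % 2 == 1]
    rng.shuffle(evens)
    rng.shuffle(odds)
    pi = [0] * n
    pi[0::2] = evens  # even slots receive even source indices
    pi[1::2] = odds   # odd slots receive odd source indices
    return pi

pi = odd_even_interleaver(8)
assert all(i % 2 == pi[i] % 2 for i in range(8))  # odd-even property
```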

    Decoding Schemes for Foliated Sparse Quantum Error Correcting Codes

    Foliated quantum codes are a resource for fault-tolerant measurement-based quantum error correction, for quantum repeaters, and for quantum computation. They represent a general approach to integrating a range of possible quantum error correcting codes into larger fault-tolerant networks. Here we present an efficient heuristic decoding scheme for foliated quantum codes, based on message passing between primal and dual code 'sheets'. We test this decoder on two different families of sparse quantum error correcting codes, turbo codes and bicycle codes, and show reasonably high numerical performance thresholds. We also present a construction schedule for building such code states. (Comment: 23 pages, 15 figures, accepted for publication in Phys. Rev.)
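    The primal/dual message-passing schedule described above amounts to two soft decoders that repeatedly feed each other their latest soft outputs as priors. The skeleton below is only a schematic of that loop, with hypothetical callables standing in for the sheet decoders; the actual heuristic operates on the foliated lattice structure of the code.

```python
def decode_foliated(primal_step, dual_step, rounds=10):
    """Schematic of the heuristic schedule: alternate decoding of the
    primal and dual sheets, exchanging soft messages each round.
    primal_step/dual_step are hypothetical stand-ins for the real
    sheet decoders."""
    msg = None  # soft information passed between the sheets
    for _ in range(rounds):
        msg = primal_step(msg)  # decode primal sheet given dual's messages
        msg = dual_step(msg)    # decode dual sheet given primal's messages
    return msg
```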

    Entanglement-assisted quantum turbo codes

    An unexpected breakdown in the existing theory of quantum serial turbo coding is that a quantum convolutional encoder cannot simultaneously be recursive and non-catastrophic. These properties are essential for quantum turbo code families to have a minimum distance growing with blocklength and for their iterative decoding algorithm to converge, respectively. Here, we show that the entanglement-assisted paradigm simplifies the theory of quantum turbo codes, in the sense that an entanglement-assisted quantum (EAQ) convolutional encoder can possess both of the aforementioned desirable properties. We give several examples of EAQ convolutional encoders that are both recursive and non-catastrophic and detail their relevant parameters. We then modify the quantum turbo decoding algorithm of Poulin et al. so that the constituent decoders pass along only "extrinsic information" to each other, rather than a posteriori probabilities as in the original decoder; this leads to a significant improvement in the performance of unassisted quantum turbo codes. Other simulation results indicate that entanglement-assisted turbo codes can operate reliably in a noise regime 4.73 dB beyond that of standard quantum turbo codes, when used on a memoryless depolarizing channel. Furthermore, several of our quantum turbo codes are within 1 dB or less of their hashing limits, so that the performance of quantum turbo codes is now on par with that of classical turbo codes. Finally, we prove that entanglement is the resource that enables a convolutional encoder to be both non-catastrophic and recursive, because an encoder acting on only information qubits, classical bits, gauge qubits, and ancilla qubits cannot simultaneously satisfy both properties. (Comment: 31 pages; software for simulating EA turbo codes is available at http://code.google.com/p/ea-turbo/ and a presentation is available at http://markwilde.com/publications/10-10-EA-Turbo.ppt ; v2, revisions based on feedback from journal; v3, modification of the quantum turbo decoding algorithm that leads to improved performance over the results in v2 and the results of Poulin et al. in arXiv:0712.288)
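    The decoder modification described above follows the usual turbo principle: each constituent decoder must subtract (in the log domain) or divide out (in the probability domain) the prior it was given, so that only newly generated "extrinsic" information reaches its partner. A minimal probability-domain sketch over the single-qubit Pauli alphabet follows; the function name and toy numbers are illustrative, and the full message schedule lives in the paper's accompanying software.

```python
import numpy as np

def extrinsic_from_posterior(posterior, prior):
    """Remove the prior's contribution from an a-posteriori distribution,
    leaving only extrinsic information to pass to the other decoder."""
    ext = posterior / np.maximum(prior, 1e-300)   # divide out the prior
    return ext / ext.sum(axis=-1, keepdims=True)  # renormalize

# toy example over the single-qubit Pauli alphabet {I, X, Y, Z}
prior = np.array([0.85, 0.05, 0.05, 0.05])
posterior = np.array([0.70, 0.20, 0.05, 0.05])
print(extrinsic_from_posterior(posterior, prior))
```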

    Replacing the Soft FEC Limit Paradigm in the Design of Optical Communication Systems

    The FEC limit paradigm is the prevalent practice for designing optical communication systems to attain a certain bit-error rate (BER) without forward error correction (FEC). This practice assumes that there is an FEC code that will reduce the BER after decoding to the desired level. In this paper, we challenge this practice and show that the concept of a channel-independent FEC limit is invalid for soft-decision bit-wise decoding. It is shown that, for low code rates and high-order modulation formats, the use of the soft FEC limit paradigm can underestimate the spectral efficiency by up to 20%. A better predictor of the BER after decoding is the generalized mutual information (GMI), which is shown to give consistent post-FEC BER predictions across different channel conditions and modulation formats. Extensive optical full-field simulations and experiments are carried out in both the linear and nonlinear transmission regimes to confirm the theoretical analysis.
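    As a concrete illustration, the GMI can be estimated directly from the transmitted bits and the soft demapper LLRs via the standard time-averaging estimator GMI ≈ 1 − E[log2(1 + e^(−xL))], where x = ±1 is the bipolar transmitted bit. The sketch below applies it to an idealised binary-input AWGN channel; the paper's evaluation, by contrast, uses optical full-field simulations and experiments.

```python
import numpy as np

def gmi_per_bit(bits, llrs):
    """Monte-Carlo GMI estimate (bit/symbol) from bits and their LLRs,
    using GMI ~= 1 - E[log2(1 + exp(-x * L))] with x = 1 - 2*bit."""
    x = 1.0 - 2.0 * np.asarray(bits, dtype=float)
    L = np.asarray(llrs, dtype=float)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * L)))

# sanity check on a binary-input AWGN channel (BPSK)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100_000)
x = 1.0 - 2.0 * bits
sigma = 0.8
y = x + sigma * rng.normal(size=bits.size)
llrs = 2.0 * y / sigma**2  # exact LLRs for BPSK over AWGN
print(f"GMI estimate: {gmi_per_bit(bits, llrs):.3f} bit/symbol")
```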

    Fixed-complexity quantum-assisted multi-user detection for CDMA and SDMA

    In a system supporting numerous users, the complexity of the optimal Maximum Likelihood Multi-User Detector (ML MUD) becomes excessive. Based on the superimposed constellations of K users, the ML MUD outputs the specific multi-level K-user symbol that minimizes the Euclidean distance with respect to the faded and noise-contaminated received multi-level symbol. Explicitly, this Euclidean distance is used as the Cost Function (CF). In a system supporting K users employing M-ary modulation, the ML MUD requires M^K CF evaluations (CFEs) per time slot. In this contribution, we propose an Early Stopping-aided Dürr-Høyer Algorithm-based Quantum-assisted MUD (ES-DHA QMUD), which relies on two techniques for achieving optimal ML detection at low complexity. Our solution is also capable of flexibly adjusting the QMUD's performance versus complexity trade-off, depending on the computing power available at the base station. We conclude by proposing a general design methodology for the ES-DHA QMUD in the context of both CDMA and SDMA systems.
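    To make the M^K scaling concrete, the sketch below implements the brute-force ML MUD baseline: it evaluates the Euclidean-distance cost function once per candidate K-user symbol vector, i.e. M^K CFEs per time slot. All names and the toy channel model are illustrative assumptions; a Dürr-Høyer-style quantum search would locate the minimum of this same cost function with on the order of sqrt(M^K) CFEs.

```python
import itertools
import numpy as np

def ml_mud_bruteforce(y, H, constellation):
    """Exhaustive ML MUD: evaluate the Euclidean-distance cost function
    for every one of the M**K candidate K-user symbol vectors."""
    K = H.shape[1]
    best, best_cost = None, np.inf
    for cand in itertools.product(constellation, repeat=K):  # M**K candidates
        s = np.array(cand)
        cost = np.linalg.norm(y - H @ s) ** 2  # the cost function (CF)
        if cost < best_cost:
            best, best_cost = s, cost
    return best, best_cost

# toy example: K = 3 users, QPSK (M = 4)  ->  4**3 = 64 CFEs
rng = np.random.default_rng(1)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))) / np.sqrt(2)
s_true = qpsk[rng.integers(0, 4, 3)]
y = H @ s_true + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
s_hat, _ = ml_mud_bruteforce(y, H, qpsk)
print(np.allclose(s_hat, s_true))  # expected: True at this low noise level
```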

    Graphical Structures for Design and Verification of Quantum Error Correction

    We introduce a high-level graphical framework for designing and analysing quantum error correcting codes, centred on what we term the coherent parity check (CPC). The graphical formulation is based on the diagrammatic tools of the zx-calculus of quantum observables. The resulting framework leads to a construction for stabilizer codes that allows us to design and verify a broad range of quantum codes based on classical ones, and that gives a means of discovering large classes of codes using both analytical and numerical methods. We focus in particular on the smaller codes that will be the first used by near-term devices. We show how CSS codes form a subset of CPC codes and, more generally, how to compute the stabilizers of a CPC code. As an explicit example of this framework, we give a method for turning almost any pair of classical [n,k,3] codes into a [[2n - k + 2, k, 3]] CPC code. Further, we give a simple technique for machine search, which yields thousands of potential codes, and demonstrate its operation for distance-3 and distance-5 codes. Finally, we use the graphical tools to demonstrate how Clifford computation can be performed within CPC codes. As our framework gives a new tool for constructing small- to medium-sized codes with relatively high code rates, it provides a new source of codes that could be suitable for emerging devices, while its zx-calculus foundations enable natural integration of error correction with graphical compiler toolchains. It also provides a powerful framework for reasoning about all stabilizer quantum error correction codes of any size. (Comment: Computer code associated with this paper may be found at https://doi.org/10.15128/r1bn999672)
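    The quoted construction fixes the code parameters as a simple function of the classical pair, which is worth a worked example: two classical [7,4,3] Hamming codes give a [[2·7 − 4 + 2, 4, 3]] = [[12, 4, 3]] CPC code. The snippet below captures just this parameter arithmetic, not the stabilizer construction itself.

```python
def cpc_params(n, k):
    """Parameters of the [[2n - k + 2, k, 3]] CPC code obtained from a
    pair of classical [n, k, 3] codes, per the construction above."""
    return (2 * n - k + 2, k, 3)

print(cpc_params(7, 4))  # two [7,4,3] Hamming codes -> (12, 4, 3)
```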

    Extrinsic information transfer charts for characterizing the iterative decoding convergence of fully parallel turbo decoders

    Fully parallel turbo decoders (FPTDs) have been shown to offer a more-than-sixfold processing throughput and latency improvement over conventional logarithmic Bahl–Cocke–Jelinek–Raviv (Log-BCJR) turbo decoders. Rather than requiring hundreds or even thousands of time periods to decode each frame, as conventional Log-BCJR turbo decoders do, the FPTD completes each decoding iteration using only one or two time periods, although up to six times as many decoding iterations are required to achieve the same error correction performance. Until now, it has not been possible to explain this increased iteration requirement using an extrinsic information transfer (EXIT) chart analysis, since the two component decoders are not operated alternately in the FPTD. Hence, in this paper, we propose a novel EXIT chart technique for characterizing the iterative exchange not only of extrinsic logarithmic likelihood ratios (LLRs) in the FPTD, but also of extrinsic state metrics. In this way, the proposed technique can accurately predict the number of decoding iterations required for achieving iterative decoding convergence, as confirmed by Monte Carlo simulation. The proposed technique offers new insights into the operation of FPTDs, which will facilitate improved designs in the future, in the same way that conventional EXIT charts have enhanced the design and understanding of conventional Log-BCJR turbo decoders.
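    The quantity plotted on an EXIT chart is the mutual information between the coded bits and the extrinsic LLRs, which can be measured with the same time-averaging estimator used for the GMI above. The sketch below computes one EXIT point per decoding iteration; the paper's extension to extrinsic state metrics is not reproduced here, and the helper names are illustrative.

```python
import numpy as np

def exit_point(bits, ext_llrs):
    """Mutual information I(X; L) between coded bits and extrinsic LLRs:
    one point on an EXIT chart (time-averaging estimator)."""
    x = 1.0 - 2.0 * np.asarray(bits, dtype=float)
    L = np.asarray(ext_llrs, dtype=float)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * L)))

def exit_trajectory(bits, llrs_per_iteration):
    """Track how extrinsic mutual information grows with each decoding
    iteration, e.g. to visualise an FPTD's convergence behaviour."""
    return [exit_point(bits, L) for L in llrs_per_iteration]
```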