
    The Road From Classical to Quantum Codes: A Hashing Bound Approaching Design Procedure

    Powerful Quantum Error Correction Codes (QECCs) are required for stabilizing and protecting fragile qubits against the undesirable effects of quantum decoherence. As in the classical domain, hashing bound approaching QECCs may be designed by exploiting a concatenated code structure that invokes iterative decoding. Therefore, in this paper we provide an extensive step-by-step tutorial for designing EXtrinsic Information Transfer (EXIT) chart aided concatenated quantum codes based on the underlying quantum-to-classical isomorphism. These design lessons are then exemplified in the context of our proposed Quantum Irregular Convolutional Code (QIRCC), which constitutes the outer component of a concatenated quantum code. The proposed QIRCC can be dynamically adapted to match any given inner code using EXIT charts, hence achieving a performance close to the hashing bound. It is demonstrated that our QIRCC-based optimized design is capable of operating within 0.4 dB of the noise limit.
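    EXIT-chart aided design hinges on one measurement: how much extrinsic mutual information a component decoder produces for a given amount of a priori mutual information. As a minimal, generic illustration (not code from the paper; the function name and sample size are our own), the sketch below estimates the standard J-function of classical EXIT analysis, i.e. the mutual information carried by "consistent" Gaussian a priori LLRs:

```python
import numpy as np

def j_function(sigma, n=200_000, seed=0):
    """Monte Carlo estimate of J(sigma): the mutual information between a
    uniformly random BPSK bit and a 'consistent' Gaussian a priori LLR,
    L ~ N(sigma^2/2, sigma^2) conditioned on the bit mapping to +1."""
    rng = np.random.default_rng(seed)
    L = rng.normal(sigma**2 / 2.0, sigma, n)
    # I(X; L) = 1 - E[log2(1 + exp(-L))]; logaddexp keeps the tail stable
    return 1.0 - np.mean(np.logaddexp(0.0, -L)) / np.log(2.0)

# J grows monotonically from 0 (no a priori knowledge) towards 1
for s in (0.1, 1.0, 3.0, 8.0):
    print(f"J({s}) ~ {j_function(s):.3f}")
```

    Tracing one point of an EXIT curve then amounts to feeding LLRs drawn from this model into the component decoder and applying the same estimator to its extrinsic output; matching the outer (QIRCC) curve to the inner code's curve so that a narrow open tunnel remains is what pushes the design toward the hashing bound.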

    802.11 Payload Iterative decoding between multiple transmission attempts

    The Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard specifies widely used technology for wireless local area networks (WLAN). The standard specifies high-performance physical and media access control (MAC) layers for a distributed network but lacks an effective hybrid automatic repeat request (HARQ). Currently, the standard specifies forward error correction (FEC), error detection (ED), and automatic repeat request (ARQ), but in case of decoding errors the previously transmitted information is not used when decoding the retransmitted packet. This scheme is called Type 1 HARQ. Type 1 HARQ uses the received energy inefficiently, but its simple implementation makes it an attractive solution. Unfortunately, research applying more sophisticated HARQ schemes on top of IEEE 802.11 is limited.

    In this Master’s Thesis, a novel HARQ technology is proposed that is based on packet retransmissions that can be decoded in a turbo-like manner, while keeping as much compatibility with vanilla 802.11 as possible. The proposed technology is simulated with both the IEEE 802.11 code and the robust, efficient and smart communication in unpredictable environments (RESCUE) code. An additional interleaver is added before the convolutional encoder, interleaving either the whole frame or only the payload, to enable effective iterative decoding. For received frames, turbo-like iterations are performed between the initially transmitted packet copy and the retransmissions. Results are compared against maximum ratio combining (MRC), the non-iterative combining method that maximizes the signal-to-noise ratio (SNR).

    The main design goal for this technology is to maintain compatibility with the 802.11 standard while allowing efficient HARQ. Other design goals are range extension, higher throughput, and better performance in terms of bit error rate (BER) and frame error rate (FER). The technology can be used for range extension in the low-SNR regime and may provide up to 4 dB of gain over MRC in the medium-SNR regime. At high SNR, it can reduce the penalty of a retransmission, allowing a higher average modulation and coding scheme (MCS). However, these gains come at the cost of the computational complexity of iterative decoding. The main limiting factors of the proposed technology are decoding errors in the header and the scrambler area, and resource-hungry processing. In the simulations, perfect synchronization and packet detection are assumed; in reality, especially at low SNR, packet detection and synchronization would be challenging.
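    For context on the baseline: for BPSK over AWGN, maximum ratio combining of retransmissions is equivalent to summing the per-copy LLRs, each scaled by its own noise variance. The following sketch is purely illustrative (variable names and parameters are our own, not from the thesis) and shows the energy that Type 1 HARQ leaves unused by discarding earlier copies:

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 100_000)
x = 1 - 2 * bits                      # BPSK mapping: bit 0 -> +1, bit 1 -> -1

# Two noisy receptions of the same packet (original Tx + one retransmission)
sigmas = [1.2, 1.0]                   # per-copy noise standard deviations
copies = [x + rng.normal(0.0, s, x.size) for s in sigmas]

def llr(y, sigma):
    return 2.0 * y / sigma**2         # per-copy LLR for BPSK over AWGN

def ber(L):
    return float(np.mean((L < 0) != (bits == 1)))

# MRC of independent AWGN copies is exactly the sum of per-copy LLRs
L_first = llr(copies[0], sigmas[0])
L_mrc = sum(llr(y, s) for y, s in zip(copies, sigmas))

print(f"BER, first copy only (Type 1 HARQ): {ber(L_first):.4f}")
print(f"BER, MRC over both copies:          {ber(L_mrc):.4f}")
```

    The proposed scheme goes one step further than this non-iterative sum: because the retransmissions are interleaved differently, turbo-like iterations can exchange extrinsic information between the copies instead of merely adding their LLRs.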

    Polar coding for optical wireless communication


    Introduction to Forward-Error-Correcting Coding

    This reference publication introduces forward-error-correcting (FEC) coding and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
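    For reference (our addition, not text from the publication): the Q function covered in the appendix is the tail probability of the standard normal distribution, and the classic calculation it enables is the uncoded BPSK bit error rate:

```latex
Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt,
\qquad
P_b^{\text{BPSK}} = Q\!\left(\sqrt{2E_b/N_0}\right)
```

    For example, at Eb/N0 = 9.6 dB this gives Pb of about 1e-5, the uncoded benchmark against which coding gain is conventionally quoted.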

    The deep space network, volume 9

    Progress on DSN supporting research and technology is reported. Topics discussed include descriptions of the objectives, functions, organization, facilities, and communication; Pioneer support; and advanced engineering.

    VLSI decoding architectures: flexibility, robustness and performance

    Stemming from previous studies on flexible LDPC decoders, this thesis work has been mainly focused on the development of flexible turbo and LDPC decoder designs, and on narrowing the power, area and speed gap they might present with respect to dedicated solutions. Additional studies have been carried out in the fields of improved code performance and of decoder resiliency to hardware errors.

    The first chapter regroups several main contributions in the design and implementation of flexible channel decoders. The first part concerns the design of a Network-on-Chip (NoC) serving as an interconnection network for a partially parallel LDPC decoder. A best-fit NoC architecture is devised, and a complete multi-standard turbo/LDPC decoder is designed and implemented. Every time the code is changed, the decoder must be reconfigured. A number of variables influence the duration of the reconfiguration process, from the involved codes down to decoder design choices. These are taken into account in the flexible decoder, and novel traffic reduction and optimization methods are then implemented.

    In the second chapter a study on the early stopping of iterations for LDPC decoders is presented. The energy expenditure of any LDPC decoder is directly linked to the iterative nature of the decoding algorithm. We propose an innovative multi-standard early stopping criterion for LDPC decoders that observes the evolution of simple metrics and relies on on-the-fly threshold computation. Its effectiveness is evaluated against existing techniques both in terms of saved iterations and, after implementation, in terms of actual energy saving.

    The third chapter portrays a study on the resilience of LDPC decoders under the effect of memory errors. Given that the purpose of channel decoders is to correct errors, LDPC decoders are intrinsically characterized by a certain degree of resistance to hardware faults. This characteristic, together with the soft nature of the stored values, results in LDPC decoders being affected differently according to the meaning of the wrong bits: ad-hoc error protection techniques, like the Unequal Error Protection devised in this chapter, can consequently be applied to different bits according to their significance.

    In the fourth chapter the serial concatenation of LDPC and turbo codes is presented. The concatenated FEC targets very high error correction capabilities, joining the performance of turbo codes at low SNR with that of LDPC codes at high SNR, and outperforming both current deep-space FEC schemes and concatenation-based FECs. A unified decoder for the concatenated scheme is subsequently proposed.
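    To make the early-stopping idea from the second chapter concrete: any such criterion sits inside the iteration loop and trades a small per-iteration computation for skipped iterations. The sketch below is a generic baseline, not the thesis' on-the-fly-threshold criterion; `iterate`, `H` and the window length are placeholder names of our own:

```python
import numpy as np

def decode_with_early_stop(H, iterate, llr_ch, max_iter=50, window=3):
    """Generic LDPC decoding loop with two early-stopping rules:
    (1) stop as soon as the hard decisions satisfy all parity checks;
    (2) give up when the number of unsatisfied checks has not improved
        over the last `window` iterations (the frame is likely undecodable).
    `iterate` stands for one full pass of the message-passing decoder:
    it takes the current posterior LLRs and returns updated ones."""
    llr = llr_ch.copy()
    history = []                            # unsatisfied-check counts
    for it in range(1, max_iter + 1):
        llr = iterate(llr)
        x_hat = (llr < 0).astype(int)       # hard decisions
        unsat = int(np.sum((H @ x_hat) % 2))
        if unsat == 0:
            return x_hat, it                # valid codeword: stop early
        history.append(unsat)
        if len(history) > window and min(history[-window:]) >= history[-window - 1]:
            break                           # metric stalled: abort early
    return x_hat, it
```

    A criterion like the one proposed in the thesis replaces the fixed stall window with thresholds computed on the fly from the observed metric, which is what makes it usable across standards with different codes and operating points.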

    Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)

    Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression.

    Hardware implementation aspects of polar decoders and ultra high-speed LDPC decoders

    The goal of channel coding is to detect and correct errors that appear during the transmission of information. In the past few decades, channel coding has become an integral part of most communications standards, as it improves the energy-efficiency of transceivers manyfold while only requiring a modest investment in terms of digital signal processing capabilities. The most commonly used channel codes in modern standards are low-density parity-check (LDPC) codes and Turbo codes, which were the first two types of codes to approach the capacity of several channels while still being practically implementable in hardware. The decoding algorithms for LDPC codes, in particular, are highly parallelizable and suitable for high-throughput applications.

    A new class of channel codes, called polar codes, was introduced recently. Polar codes have an explicit construction and low-complexity encoding and successive cancellation (SC) decoding algorithms. Moreover, polar codes are provably capacity-achieving over a wide range of channels, making them very attractive from a theoretical perspective. Unfortunately, polar codes under standard SC decoding cannot compete, in terms of error-correcting performance, with the LDPC and Turbo codes used in current standards. For this reason, several improved SC-based decoding algorithms have been introduced. The most prominent is the successive cancellation list (SCL) decoding algorithm, which is powerful enough to approach the error-correcting performance of LDPC codes. The original SCL decoding algorithm was described in an arithmetic domain that is not well-suited for hardware implementations, and it is not clear how an efficient SCL decoder architecture can be derived from it. To this end, in this thesis we re-formulate the SCL decoding algorithm in two distinct arithmetic domains, we describe efficient hardware architectures to implement the resulting SCL decoders, and we compare the decoders with existing LDPC and Turbo decoders in terms of their error-correcting performance and their implementation efficiency.

    Due to ongoing technology scaling, the feature sizes of integrated circuits keep shrinking at a remarkable pace. As transistors and memory cells shrink, it becomes increasingly difficult and costly (in terms of both area and power) to ensure that the implemented digital circuits always operate correctly. Thus, manufactured digital signal processing circuits, including channel decoder circuits, may not always operate correctly. Instead of discarding faulty dies or using costly circuit-level fault mitigation mechanisms, an alternative approach is to live with certain malfunctions, provided that the algorithm implemented by the circuit is sufficiently fault-tolerant. In this spirit, in this thesis we examine the decoding of polar codes and LDPC codes under the assumption that the memories used within the decoders are not fully reliable. We show that, in both cases, there is inherent fault-tolerance, and we propose methods to reduce the effect of memory faults on the error-correcting performance of the considered decoders.
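    To ground the terminology: SC decoding traverses the polar code's recursive structure, combining LLRs with the so-called f (check-node) and g (variable-node) updates and feeding re-encoded partial sums back down. The sketch below is a plain software rendition of standard SC decoding in the LLR domain with the min-sum f approximation; it is not the thesis' hardware-oriented SCL formulation, and the (8,4) information set used in the test is just the commonly quoted one:

```python
import numpy as np

def polar_encode(u):
    """Polar transform x = u * F^(tensor n) in natural bit order."""
    if len(u) == 1:
        return u.copy()
    h = len(u) // 2
    x1, x2 = polar_encode(u[:h]), polar_encode(u[h:])
    return np.concatenate([(x1 + x2) % 2, x2])

def f(a, b):
    # check-node update (min-sum approximation of the exact boxplus)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, s):
    # variable-node update, steered by the partial sums s of the left branch
    return b + (1 - 2 * s) * a

def sc_decode(llr, frozen):
    """LLR-domain successive cancellation decoding (frozen bits are 0)."""
    if len(llr) == 1:
        return np.array([0 if frozen[0] or llr[0] >= 0 else 1])
    h = len(llr) // 2
    a, b = llr[:h], llr[h:]
    u1 = sc_decode(f(a, b), frozen[:h])      # decode the left branch first
    s1 = polar_encode(u1)                    # re-encode it into partial sums
    u2 = sc_decode(g(a, b, s1), frozen[h:])  # then decode the right branch
    return np.concatenate([u1, u2])

# Sanity check on a noiseless (8,4) code; the information set {3,5,6,7}
# is the commonly quoted one for N = 8 (an assumption, not from the thesis)
N, info = 8, [3, 5, 6, 7]
frozen = np.ones(N, dtype=bool); frozen[info] = False
u = np.zeros(N, dtype=int); u[info] = [1, 0, 1, 1]
llr = (1 - 2 * polar_encode(u)) * 4.0        # perfect BPSK LLRs
assert np.array_equal(sc_decode(llr, frozen), u)
```

    SCL decoding extends exactly this recursion: at each non-frozen leaf it pursues both bit hypotheses, keeps the L most likely partial paths according to a path metric, and thereby closes most of the performance gap to LDPC codes.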