20 research outputs found

    Design of a GF(64)-LDPC Decoder Based on the EMS Algorithm

    No full text
This paper presents the architecture, performance and implementation results of a serial GF(64)-LDPC decoder based on a reduced-complexity version of the Extended Min-Sum algorithm. The main contributions of this work concern the variable node processing, the codeword decision and the elementary check node processing. Post-synthesis area results show that the decoder occupies less than 20% of a Virtex-4 FPGA for a decoding throughput of 2.95 Mbps. The implemented decoder performs within 0.7 dB of the Belief Propagation algorithm for different code lengths and rates. Moreover, the proposed architecture can be easily adapted to decode very high Galois field orders, such as GF(4096) or higher, by slightly modifying a marginal part of the design.
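As a rough illustration of the elementary check-node step at the heart of the reduced-complexity EMS algorithm, the Python sketch below combines two truncated messages over GF(2^m); the (cost, symbol) message format, the brute-force pairwise search (hardware decoders use sorted-list merges), and the omission of the parity-check coefficient multiplications are simplifying assumptions, not details taken from the paper.

```python
# Minimal sketch of one elementary check-node (ECN) step in the
# Extended Min-Sum (EMS) algorithm over GF(2^m). Assumed conventions
# (not from the paper): each message is a list of (cost, symbol) pairs
# truncated to the n_m most reliable entries, sorted by ascending cost
# (0 = most likely), and GF(2^m) addition is bitwise XOR.

def ecn_combine(msg_a, msg_b, n_m):
    """Combine two truncated EMS messages into one.

    Returns the n_m lowest-cost (cost, symbol) pairs for the GF sum
    symbol_a ^ symbol_b, keeping the best cost per output symbol.
    A real decoder would use a sorted-list merge instead of the
    brute-force double loop used here for clarity.
    """
    best = {}  # symbol -> lowest combined cost found so far
    for cost_a, sym_a in msg_a:
        for cost_b, sym_b in msg_b:
            sym = sym_a ^ sym_b      # GF(2^m) addition
            cost = cost_a + cost_b   # min-sum metric: costs add
            if sym not in best or cost < best[sym]:
                best[sym] = cost
    return sorted((c, s) for s, c in best.items())[:n_m]

# Toy example over GF(4) with n_m = 2:
a = [(0.0, 1), (0.8, 3)]
b = [(0.0, 2), (0.5, 0)]
print(ecn_combine(a, b, 2))  # [(0.0, 3), (0.5, 1)]
```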

    New Algorithms for High-Throughput Decoding with Low-Density Parity-Check Codes using Fixed-Point SIMD Processors

    Get PDF
Most digital signal processors contain one or more functional units with a single-instruction, multiple-data architecture that supports saturating fixed-point arithmetic with two or more options for the arithmetic precision. The processors designed for the highest performance contain many such functional units connected through an on-chip network. The selection of the arithmetic precision provides a trade-off between the task-level throughput and the quality of the output of many signal-processing algorithms, and utilization of the interconnection network during execution of the algorithm introduces a latency that can also limit the algorithm's throughput. In this dissertation, we consider the turbo-decoding message-passing algorithm for iterative decoding of low-density parity-check codes and investigate its performance in parallel execution on a processor of interconnected functional units employing fast, low-precision fixed-point arithmetic. It is shown that the frequent occurrence of saturation when 8-bit signed arithmetic is used severely degrades the performance of the algorithm compared with decoding using higher-precision arithmetic. A technique of limiting the magnitude of certain intermediate variables of the algorithm, the extrinsic values, is proposed and shown to eliminate most occurrences of saturation, resulting in performance with 8-bit decoding nearly equal to that achieved with higher-precision decoding. We show that the interconnection latency can have a significant detrimental effect on the throughput of the turbo-decoding message-passing algorithm, which is illustrated for a type of high-performance digital signal processor known as a stream processor. Two alternatives to the standard schedule of message-passing and parity-check operations are proposed for the algorithm. Both alternatives markedly reduce the interconnection latency, and both result in substantially greater throughput than the standard schedule with no increase in the probability of error.
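A minimal sketch of the magnitude-limiting idea follows, assuming a NumPy model of saturating 8-bit signed arithmetic and an illustrative clamp threshold E_MAX; neither the threshold value nor the function names come from the dissertation.

```python
import numpy as np

# Sketch of limiting extrinsic magnitudes before saturating 8-bit
# additions. E_MAX = 48 is an illustrative choice, not a value from
# the dissertation; the point is only that clamped operands leave
# headroom so later sums stay representable in int8.

INT8_MIN, INT8_MAX = -128, 127
E_MAX = 48  # hypothetical clamp threshold for extrinsic values

def sat_add(a, b):
    """Saturating 8-bit signed addition, as in fixed-point SIMD units."""
    return int(np.clip(int(a) + int(b), INT8_MIN, INT8_MAX))

def limit_extrinsic(e):
    """Clamp an extrinsic value to +/- E_MAX before it is stored."""
    return int(np.clip(e, -E_MAX, E_MAX))

# Without limiting, two large extrinsics saturate and lose information:
print(sat_add(100, 90))                                    # 127
# With limiting, the sum remains representable:
print(sat_add(limit_extrinsic(100), limit_extrinsic(90)))  # 96
```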

    Non-Clifford and parallelizable fault-tolerant logical gates on constant and almost-constant rate homological quantum LDPC codes via higher symmetries

    Full text link
We study parallel fault-tolerant quantum computing for families of homological quantum low-density parity-check (LDPC) codes defined on 3-manifolds with constant or almost-constant encoding rate. We derive a generic formula for a transversal $T$ gate of color codes on general 3-manifolds, which acts as collective non-Clifford logical $CCZ$ gates on any triplet of logical qubits whose logical-$X$ membranes have a $\mathbb{Z}_2$ triple intersection at a single point. The triple intersection number is a topological invariant, which also arises in the path integral of the emergent higher symmetry operator in a topological quantum field theory: the $\mathbb{Z}_2^3$ gauge theory. Moreover, the transversal $S$ gate of the color code corresponds to a higher-form symmetry supported on a codimension-1 submanifold, giving rise to exponentially many addressable and parallelizable logical $CZ$ gates. We have developed a generic formalism to compute the triple intersection invariants for 3-manifolds and also study the scaling of the Betti number and systoles with volume for various 3-manifolds, which translates to the encoding rate and distance. We further develop three types of LDPC codes supporting such logical gates: (1) a quasi-hyperbolic code from the product of a 2D hyperbolic surface and a circle, with almost-constant rate $k/n = O(1/\log(n))$ and $O(\log(n))$ distance; (2) a homological fibre bundle code with $O(1/\log^{1/2}(n))$ rate and $O(\log^{1/2}(n))$ distance; (3) a specific family of 3D hyperbolic codes: the Torelli mapping torus code, constructed from mapping tori of a pseudo-Anosov element in the Torelli subgroup, which has constant rate while the distance scaling is currently unknown. We then show a generic constant-overhead scheme for applying a parallelizable universal gate set with the aid of logical-$X$ measurements.
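As a schematic way to write down the gate action just described (the notation is assumed, not the paper's): the transversal $T$ gate induces $CCZ$ on exactly those triplets of logical qubits whose logical-$X$ membranes have odd $\mathbb{Z}_2$ triple intersection.

```latex
% Schematic only: M_i denotes the logical-X membrane of logical qubit i
% and #(M_i \cap M_j \cap M_k) its Z_2 triple intersection number; the
% product form is a paraphrase of the abstract, not the paper's formula.
\[
  T^{\otimes n} \;\longmapsto\;
  \prod_{i<j<k} \mathrm{CCZ}_{ijk}^{\,\#(M_i \cap M_j \cap M_k) \bmod 2}
\]
```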

    Advances in Modeling and Signal Processing for Bit-Patterned Magnetic Recording Channels with Written-In Errors

    Get PDF
In the past, perpendicular magnetic recording on continuous media has served as the storage mechanism for the hard-disk drive (HDD) industry, allowing for growth in areal densities approaching 0.5 Tb/in². Under the current system design, further increases are limited by the superparamagnetic effect, whereby the medium's thermal energy destabilizes the individual bit domains used for storage. In order to provide for future growth in the area of magnetic recording for disk drives, a number of technology shifts have been proposed and are currently undergoing considerable research. One promising option involves switching to a discrete medium in the form of individual bit islands, termed bit-patterned magnetic recording (BPMR).

When switching from a continuous to a discrete medium, the problems encountered become substantial for every aspect of hard-disk drive design. In this dissertation the complications in modeling and signal processing for bit-patterned magnetic recording are investigated, where the write and read processes along with the channel characteristics present considerable challenges. For a target areal density of 4 Tb/in², the storage process is hindered by media noise, two-dimensional (2D) intersymbol interference (ISI), electronics noise and written-in errors introduced during the write process. Thus there is a strong possibility that BPMR may prove intractable as a future HDD technology at high areal densities, because the combined negative effects of the many error sources produce an environment where current signal processing techniques cannot accurately recover the stored data. The purpose here is to exploit advanced methods of detection and error correction to show that data can be effectively recovered from a BPMR channel in the presence of multiple error sources at high areal densities.

First, a practical model for the readback response of an individual island is established that is capable of representing its 2D nature with a Gaussian pulse. Various characteristics of the readback pulse are shown to emerge as it is subjected to the degradation of 2D media noise. The writing of the bits within a track is also investigated, with an emphasis on the write process's ability to inject written-in errors into the data stream, resulting both from a loss of synchronization of the write clock and from the interaction of the local-scale magnetic fields under the influence of the applied write field.

To facilitate data recovery in the presence of BPMR's major degradations, various detection and error-correction methods are utilized. For single-track equalization of the channel output, noise prediction is incorporated to assist detection at increased levels of media noise. With large detrimental amounts of 2D ISI and media noise present in the channel at high areal densities, a 2D approach known as multi-track detection is investigated, where multiple tracks are sensed by the read heads and then used to extract information on the target track. For BPMR the output of the detector still possesses the uncorrected written-in errors. Powerful error-correction codes based on finite geometries are employed to help recover the original data stream. Increased error correction is sought by utilizing two-fold EG codes in combination with a form of automorphism decoding known as auto-diversity. Modifications to the parity-check matrices of the error-correction codes are also investigated with the aim of more practical applications of the decoding algorithms based on belief propagation. Under the proposed techniques it is shown that effective data recovery is possible at an areal density of 4 Tb/in² in the presence of all significant error sources except for insertions and deletions. Data recovery from the BPMR channel with insertions and deletions remains an open problem.
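As a toy version of the Gaussian island-readback model described above, the sketch below evaluates a 2D Gaussian pulse and the 2D ISI leakage from neighbouring islands; the parameter names, values and the PW50-to-sigma conversion are illustrative assumptions, not the dissertation's fitted model.

```python
import numpy as np

# Toy 2D Gaussian readback pulse for a single bit island. The PW50
# values and the island-pitch units are illustrative assumptions.

def island_response(x, y, amp=1.0, pw50_x=1.0, pw50_y=1.2):
    """Readback amplitude at offset (x, y) from the island centre.

    PW50 (width at half maximum) is converted to a Gaussian sigma via
    PW50 = 2 * sigma * sqrt(2 * ln 2).
    """
    c = 2.0 * np.sqrt(2.0 * np.log(2.0))
    sx, sy = pw50_x / c, pw50_y / c
    return amp * np.exp(-0.5 * ((x / sx) ** 2 + (y / sy) ** 2))

# Main response vs. 2D ISI leakage from the nearest neighbours:
print(island_response(0.0, 0.0))  # target island, 1.0
print(island_response(1.0, 0.0))  # down-track neighbour, ~0.06
print(island_response(0.0, 1.0))  # cross-track neighbour, ~0.15
```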

    On Lowering the Error Floor of Short-to-Medium Block Length Irregular Low Density Parity Check Codes

    Get PDF
Gallager proposed and developed low density parity check (LDPC) codes in the early 1960s. LDPC codes were rediscovered in the early 1990s and shown to be capacity approaching over the additive white Gaussian noise (AWGN) channel. Subsequently, density evolution (DE) optimized symbol node degree distributions were used to significantly improve the decoding performance of short to medium length irregular LDPC codes. Currently, the short to medium length LDPC codes with the lowest error floor are DE optimized irregular LDPC codes constructed using progressive edge growth (PEG) algorithm modifications designed to increase the approximate cycle extrinsic message degree (ACE) in the constructed LDPC code graphs. The aim of the present work is to find efficient means of improving on the error floor performance published for short to medium length irregular LDPC codes over AWGN channels. An efficient algorithm for determining the girth and ACE distributions in short to medium length LDPC code Tanner graphs is proposed. A cyclic PEG (CPEG) algorithm, which uses an edge connection sequence that results in LDPC codes with improved girth and ACE distributions, is presented. LDPC codes with DE optimized/'good' degree distributions which have larger minimum distances and stopping distances than previously published for LDPC codes of similar length and rate have been found. It is shown that increasing the minimum distance of LDPC codes lowers their error floor over AWGN channels; however, there are threshold minimum distance values above which there is no further lowering of the error floor. A minimum local girth (edge skipping) (MLG (ES)) PEG algorithm is presented; the algorithm controls the minimum local girth (global girth) connected in the Tanner graphs of LDPC codes constructed by forfeiting some edge connections. A technique for constructing optimal low correlated edge density (OED) LDPC codes based on modified DE optimized symbol node degree distributions and the MLG (ES) PEG algorithm modification is presented. OED rate-1/2 (n, k) = (512, 256) LDPC codes are shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. Similarly, consequent to an improved symbol node degree distribution, rate-1/2 (n, k) = (1024, 512) LDPC codes are shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. An improved BP/SPA (IBP/SPA) decoder, obtained by making two simple modifications to the standard BP/SPA decoder, is shown to result in an unprecedented generalized improvement in the performance of short to medium length irregular LDPC codes under iterative message passing decoding. The superiority of the Slepian-Wolf distributed source coding model over other distributed source coding models based on LDPC codes is also shown.
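To make the local-girth quantity concrete, here is a minimal BFS sketch for the shortest cycle through a given variable node of a Tanner graph; the node labelling and adjacency format are assumptions for illustration, and this is far simpler than the thesis's girth/ACE distribution algorithm.

```python
from collections import deque

# Local girth at a node of a Tanner graph via BFS. Nodes are labelled
# ('v', i) for variable nodes and ('c', j) for check nodes; `adj` maps
# each node to a list of its neighbours.

def local_girth(adj, start):
    """Length of the shortest cycle through `start` (inf if none)."""
    dist = {start: 0}
    branch = {}               # which neighbour of `start` a path left by
    q = deque()
    for nb in adj[start]:
        if nb in dist:        # parallel edge: length-2 cycle
            return 2
        dist[nb], branch[nb] = 1, nb
        q.append(nb)
    best = float('inf')
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w == start:
                continue
            if w not in dist:
                dist[w], branch[w] = dist[u] + 1, branch[u]
                q.append(w)
            elif branch[w] != branch[u]:
                # Two vertex-disjoint paths from `start` meet: a cycle
                # through `start` of length dist(u) + dist(w) + 1.
                best = min(best, dist[u] + dist[w] + 1)
        if dist[u] + 1 >= best:
            break             # no shorter cycle can still be found
    return best

# Toy Tanner graph: a length-4 cycle v0 - c0 - v1 - c1 - v0.
adj = {
    ('v', 0): [('c', 0), ('c', 1)],
    ('v', 1): [('c', 0), ('c', 1)],
    ('c', 0): [('v', 0), ('v', 1)],
    ('c', 1): [('v', 0), ('v', 1)],
}
print(local_girth(adj, ('v', 0)))  # 4
```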

    Design of low-density parity-check codes for magnetic recording channels.

    Get PDF
A technique for designing low-density parity-check (LDPC) error correcting codes for use with the partial-response channels commonly used in magnetic recording is presented. This technique combines the well-known density evolution method of Richardson and Urbanke for analyzing the performance of the LDPC decoder with a newly developed method for density evolution analysis of the Bahl-Cocke-Jelinek-Raviv (BCJR) channel decoder, in order to predict the performance of LDPC codes in systems that employ both LDPC and BCJR decoders and to search for good codes. We present examples of codes that perform 0.3 dB to 0.5 dB better than the regular column-weight-three codes employed in previous work.

A new algorithm is also presented, which we call "MTR enforcement". Typical magnetic recording systems employ not just an error correcting code, but also some form of run-length-limited code or maximum-transition-run (MTR) code. The MTR enforcement algorithm allows us to exploit the added redundancy imposed by the MTR code to increase performance over that of a magnetic recording system which does not employ the MTR enforcer. We show a gain of approximately 0.5 dB from the MTR enforcer in a typical magnetic recording system. We also discuss methods of making so-called "soft-error estimates", which attempt to extrapolate the bit-error-rate (BER) curve from Monte Carlo simulations down below the limits for which traditional BER results are valid. The recent work by Yedidia on generalizations of the belief propagation algorithm is discussed, and we consider problems that arise in using this generalized belief propagation method for decoding LDPC codes.
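As a compact stand-in for the density evolution analysis mentioned above, the sketch below computes the decoding threshold of a regular (dv, dc) LDPC ensemble on the binary erasure channel, where the recursion is closed-form; the work described here targets the AWGN partial-response channel with a BCJR front end, which this sketch does not model.

```python
# Density evolution for a regular (dv, dc) LDPC ensemble on the binary
# erasure channel (BEC) -- a simplified stand-in for the AWGN/BCJR
# analysis described above. x is the erasure probability of a
# variable-to-check message; eps is the channel erasure rate.

def de_converges(eps, dv, dc, iters=2000, tol=1e-12):
    """True if the message erasure probability is driven to ~0."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def threshold(dv, dc):
    """Bisect for the largest eps at which decoding still succeeds."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(mid, dv, dc) else (lo, mid)
    return lo

print(round(threshold(3, 6), 4))  # ~0.4294 for the (3,6) ensemble
```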

    Enumerative sphere shaping techniques for short blocklength wireless communications

    Get PDF