
    New decoding scheme for LDPC codes based on simple product code structure

    In this paper, a new decoding scheme for low-density parity-check (LDPC) codes using the concept of a simple product code structure is proposed, based on combining two independently received soft-decision data sets for the same codeword. LDPC codes act as the horizontal codes of the product codes, and simple algebraic codes are used as vertical codes to assist the decoding of the LDPC codes. The decoding capability of the proposed decoding scheme is defined and analyzed using the parity-check matrices of the vertical codes, and in particular the combined-decodability is derived for the case of single parity-check (SPC) and Hamming codes being used as vertical codes. It is also shown that the proposed decoding scheme achieves much better error-correcting capability in the high signal-to-noise ratio (SNR) region with low additional decoding complexity, compared with a conventional decoding scheme.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
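
    The front end of the scheme combines two independently received soft-decision vectors for the same codeword before LDPC decoding. The minimal sketch below shows only that combining step, assuming BPSK over AWGN, where the log-likelihood ratios (LLRs) of independent observations simply add; the vertical SPC/Hamming component decoding described in the abstract is not modeled, and all numbers are hypothetical.

    import numpy as np

    def combine_llrs(llr_rx1, llr_rx2):
        # For independent observations of the same bits, the channel
        # LLRs simply add; the sum is what the LDPC decoder would see.
        return np.asarray(llr_rx1) + np.asarray(llr_rx2)

    # Hypothetical BPSK-over-AWGN example: LLR = 2*y / sigma^2.
    sigma = 0.8
    y1 = np.array([0.9, -1.2, 0.3, -0.7])     # first reception
    y2 = np.array([1.1, -0.8, -0.2, -1.0])    # second reception
    llr = combine_llrs(2 * y1 / sigma**2, 2 * y2 / sigma**2)
    print(llr, (llr < 0).astype(int))          # combined LLRs, hard decisions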

    Improving the tolerance of stochastic LDPC decoders to overclocking-induced timing errors: a tutorial and design example

    Channel codes such as Low-Density Parity-Check (LDPC) codes may be employed in wireless communication schemes for correcting transmission errors. This tolerance to channel-induced transmission errors allows the communication schemes to achieve higher transmission throughputs, at the cost of requiring additional processing for performing LDPC decoding. However, this LDPC decoding operation is associated with a potentially inadequate processing throughput, which may constrain the attainable transmission throughput. In order to increase the processing throughput, the clock period may be reduced, albeit at the cost of potentially introducing timing errors. Previous research efforts have produced only a paucity of solutions for mitigating the occurrence of timing errors in channel decoders, all of which employ additional circuitry for detecting and correcting these overclocking-induced timing errors. Against this background, in this paper we demonstrate that stochastic LDPC decoders (LDPC-SDs) are capable of exploiting their inherent error correction capability for correcting not only transmission errors but also timing errors, even without the requirement for additional circuitry. Motivated by this, we provide the first comprehensive tutorial on LDPC-SDs. We also propose a novel design flow for timing-error-tolerant LDPC decoders, which we use to develop a timing error model for LDPC-SDs and to investigate how their overall error correction performance is affected by overclocking. Drawing upon our findings, we propose a modified LDPC-SD having an improved timing error tolerance. In a particular practical scenario, this modification eliminates the approximately 1 dB performance degradation suffered by an overclocked LDPC-SD without our modification, enabling the processing throughput to be increased by up to 69.4%, without compromising the error correction capability or processing energy consumption of the LDPC-SD.
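
    Stochastic LDPC decoders represent each probability as a Bernoulli bit stream, so a parity check reduces to an XOR gate and a variable node to an equality gate with memory. Below is a minimal Python sketch of those two primitives, assuming idealized independent streams; practical LDPC-SDs add edge memories to combat latching, and none of the paper's timing-error modelling is reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def to_stream(p, n):
        # Encode a probability as a Bernoulli bit stream of length n.
        return (rng.random(n) < p).astype(np.uint8)

    def check_node(a, b):
        # Stochastic check node: bitwise XOR of the incoming streams.
        return a ^ b

    def variable_node(a, b):
        # Stochastic equality node: pass the common bit when the inputs
        # agree, otherwise hold the previous output bit.
        out, state = np.empty_like(a), 0
        for i in range(len(a)):
            if a[i] == b[i]:
                state = a[i]
            out[i] = state
        return out

    n = 200_000
    s1, s2 = to_stream(0.8, n), to_stream(0.3, n)
    print(check_node(s1, s2).mean())       # ~0.8*0.7 + 0.2*0.3 = 0.62
    s3, s4 = to_stream(0.8, n), to_stream(0.8, n)
    print(variable_node(s3, s4).mean())    # ~0.64/(0.64+0.04) = 0.941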

    Check-hybrid GLDPC Codes: Systematic Elimination of Trapping Sets and Guaranteed Error Correction Capability

    In this paper, we propose a new approach to constructing a class of check-hybrid generalized low-density parity-check (CH-GLDPC) codes that are free of small trapping sets. The approach is based on converting some selected check nodes involved in a trapping set into super checks corresponding to a 2-error-correcting component code. Specifically, we pursue two main goals in constructing the check-hybrid codes: first, based on the knowledge of the trapping sets of the global LDPC code, single parity checks are replaced by super checks to disable the trapping sets. We show that by converting specific single check nodes, denoted as critical checks, to super checks in a trapping set, the parallel bit flipping (PBF) decoder corrects the errors on the trapping set and hence eliminates it. The second goal is to minimize the rate loss caused by introducing the super checks, by finding the minimum number of such critical checks. We also present an algorithm to find the critical checks in a trapping set of a column-weight-3 LDPC code and then provide upper bounds on the minimum number of such critical checks such that the decoder corrects all error patterns on elementary trapping sets. Moreover, we provide a fixed set for a class of the constructed check-hybrid codes. The guaranteed error correction capability of the CH-GLDPC codes is also studied. We show that a CH-GLDPC code in which each variable node is connected to two super checks corresponding to a 2-error-correcting component code corrects up to 5 errors. The results are also extended to column-weight-4 LDPC codes. Finally, we investigate the elimination of trapping sets of a column-weight-3 LDPC code under the Gallager B decoding algorithm and generalize the results obtained for the PBF decoder to the Gallager B decoding algorithm.
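
    The baseline decoder referenced throughout is the parallel bit flipping (PBF) decoder. The following is a minimal sketch of one common PBF variant (flip, in parallel, every bit with a strict majority of unsatisfied checks), using a hypothetical toy parity-check matrix rather than a code or trapping set from the paper; the check-hybrid construction itself is not implemented.

    import numpy as np

    def pbf_decode(H, y, max_iter=50):
        # Parallel bit flipping: each iteration flips, simultaneously,
        # every bit with a strict majority of unsatisfied checks.
        x = y.copy()
        for _ in range(max_iter):
            syndrome = H @ x % 2
            if not syndrome.any():
                return x, True                  # valid codeword found
            unsat = H.T @ syndrome              # unsatisfied checks per bit
            deg = H.sum(axis=0)                 # variable-node degrees
            x = (x + (2 * unsat > deg)) % 2     # flip on strict majority
        return x, False

    # Hypothetical toy parity-check matrix, not a code from the paper.
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    print(pbf_decode(H, np.array([1, 0, 1, 0, 0, 1])))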

    An Optimal Unequal Error Protection LDPC Coded Recording System

    For efficient modulation and error control coding, the deliberate flipping approach imposes the run-length-limited (RLL) constraint by introducing bit errors before recording. On the read side, a high coding rate limits the capability of correcting these RLL-induced bit errors. In this paper, we study low-density parity-check (LDPC) coding for an RLL-constrained recording system based on an unequal error protection (UEP) coding scheme design. The UEP capability of irregular LDPC codes is used to recover the flipped bits. We provide an allocation technique that confines the flipped bits to bits with robust correction capability. In addition, we consider the signal labeling design to decrease the number of nearest neighbors and thereby strengthen the robust bits. We also apply the density evolution technique to the proposed system to evaluate the code performance, and we use EXIT characteristics to reveal the decoding behavior of the recommended code distribution. Finally, the best degree distribution for the proposed system is obtained by optimization through differential evolution.
    Comment: 20 pages, 18 figures
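
    The deliberate flipping idea can be illustrated independently of the UEP code design: scan the recorded sequence and flip any bit that would extend a run beyond the RLL limit k, accepting those flips as write-side "errors" for the read-side LDPC decoder to repair. The sketch below shows only this write-side step with hypothetical inputs; the paper's allocation of flips onto strongly protected bits is not modeled.

    def enforce_rll_by_flipping(bits, k):
        # Flip any bit that would extend a run of identical symbols past
        # k; each flip is a deliberate write-side "error" that the UEP
        # LDPC code must recover when reading.
        out = list(bits)
        run = 1
        for i in range(1, len(out)):
            run = run + 1 if out[i] == out[i - 1] else 1
            if run > k:
                out[i] ^= 1
                run = 1
        return out

    coded = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1]   # hypothetical coded bits
    print(enforce_rll_by_flipping(coded, 3))        # runs capped at length 3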

    Coding with Scrambling, Concatenation, and HARQ for the AWGN Wire-Tap Channel: A Security Gap Analysis

    This study examines the use of nonsystematic channel codes to obtain secure transmissions over the additive white Gaussian noise (AWGN) wire-tap channel. Unlike previous approaches, we propose to implement nonsystematic coded transmission by scrambling the information bits, and we characterize the bit error rate of scrambled transmissions through theoretical arguments and numerical simulations. We focus on some examples of Bose-Chaudhuri-Hocquenghem (BCH) and low-density parity-check (LDPC) codes to estimate the security gap, which we use as a measure of physical layer security in addition to the bit error rate. Based on a number of numerical examples, we find that such a transmission technique can outperform alternative solutions. In fact, when an eavesdropper (Eve) has a worse channel than the authorized user (Bob), the security gap required to reach a given level of security is very small. The amount of degradation of Eve's channel with respect to Bob's that is needed to achieve sufficient security can be further reduced by implementing scrambling and descrambling operations on blocks of frames, rather than on single frames. Even when Eve's channel has a quality equal to or better than that of Bob's channel, we show that the use of a hybrid automatic repeat-request (HARQ) protocol with authentication still allows a sufficient level of security to be achieved. Finally, the secrecy performance of some practical schemes is also measured in terms of the equivocation rate about the message at the eavesdropper and compared with that of ideal codes.
    Comment: 29 pages, 10 figures
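
    The scrambling mechanism can be sketched concretely: multiplying the information bits by a nonsingular binary matrix before encoding means that, after descrambling, even a single residual decoding error is spread over roughly half of the information bits, which is what degrades Eve's data. A minimal GF(2) sketch with an assumed scrambler size of k = 16 follows; the paper's frame-block and HARQ variants are not modeled.

    import numpy as np

    rng = np.random.default_rng(1)

    def gf2_inv(A):
        # Invert a binary matrix over GF(2) by Gauss-Jordan elimination;
        # raises ValueError if the matrix is singular.
        k = len(A)
        M = np.hstack([A % 2, np.eye(k, dtype=int)])
        for col in range(k):
            pivots = np.flatnonzero(M[col:, col])
            if pivots.size == 0:
                raise ValueError("singular")
            piv = pivots[0] + col
            M[[col, piv]] = M[[piv, col]]
            for row in range(k):
                if row != col and M[row, col]:
                    M[row] ^= M[col]
        return M[:, k:]

    k = 16
    while True:                         # draw scramblers until one is invertible
        S = rng.integers(0, 2, (k, k))
        try:
            S_inv = gf2_inv(S)
            break
        except ValueError:
            pass

    u = rng.integers(0, 2, k)           # information bits
    tx = S @ u % 2                      # scrambled (nonsystematic) bits
    rx = tx.copy()
    rx[0] ^= 1                          # one residual error after decoding
    print((S_inv @ rx % 2 != u).sum(), "of", k, "information bits corrupted")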

    Two-Bit Bit Flipping Decoding of LDPC Codes

    In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information that is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows the potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity.
    Comment: 6 pages. Submitted to the IEEE International Symposium on Information Theory 201
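
    One way to picture a two-bit variable node is as a four-state machine: strong 0, weak 0, weak 1, strong 1, where each iteration pushes the state one step toward or away from the current value. The sketch below uses an illustrative majority-based update rule on a hypothetical toy matrix; it is not the paper's exact two-bit rule, which also allows an extra bit at the check nodes.

    import numpy as np

    # Variable-node states: 0 = strong 0, 1 = weak 0, 2 = weak 1, 3 = strong 1.
    def tbf_decode(H, y, max_iter=100):
        state = np.where(y == 0, 0, 3)       # start from strong states
        deg = H.sum(axis=0)
        for _ in range(max_iter):
            x = (state >= 2).astype(int)     # current hard decisions
            syndrome = H @ x % 2
            if not syndrome.any():
                return x, True
            unsat = H.T @ syndrome           # unsatisfied checks per variable
            toward_one = 2 * unsat > deg     # majority of checks unsatisfied
            step_up = toward_one ^ (x == 1)  # move toward 1, else toward 0
            state = np.where(step_up,
                             np.minimum(state + 1, 3),
                             np.maximum(state - 1, 0))
        return (state >= 2).astype(int), False

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    print(tbf_decode(H, np.array([1, 0, 1, 0, 0, 1])))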

    Minimum-Variance Importance-Sampling Bernoulli Estimator for Fast Simulation of Linear Block Codes over Binary Symmetric Channels

    In this paper, the choice of the Bernoulli distribution as the biased distribution for importance sampling (IS) Monte Carlo (MC) simulation of linear block codes over binary symmetric channels (BSCs) is studied. Based on the analytical derivation of the optimal IS Bernoulli distribution, with an explicit calculation of the variance of the corresponding IS estimator, two novel algorithms for the fast simulation of linear block codes are proposed. For sufficiently high signal-to-noise ratios (SNRs), one of the proposed algorithms is SNR-invariant, i.e., the IS estimator does not depend on the cross-over probability of the channel. The proposed algorithms are also shown to be suitable for estimating the error-correcting capability of the code and the decoder. Finally, the effectiveness of the algorithms is confirmed through simulation results in comparison with the standard Monte Carlo method.
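
    The core of an IS Bernoulli estimator is reweighting: error patterns are drawn from a biased distribution with crossover q > p, and each sample is multiplied by the likelihood ratio (p/q)^d ((1-p)/(1-q))^(n-d), where d is the pattern's Hamming weight, so the estimate remains unbiased for the true channel. A minimal sketch follows, using the event "more than t errors" as a stand-in for decoder failure; the values of n, t, p, and q are hypothetical, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    def is_estimate(n, t, p, q, n_samples):
        # Draw error-pattern weights under the biased crossover q, then
        # reweight by the likelihood ratio so the estimate is unbiased
        # for the true crossover p.
        d = rng.binomial(n, q, n_samples)              # Hamming weights under q
        w = (p / q) ** d * ((1 - p) / (1 - q)) ** (n - d)
        return np.mean((d > t) * w)

    # Hypothetical numbers: P(more than t = 10 errors in n = 127 uses of a
    # BSC with p = 0.01), an event far too rare for plain MC at this sample
    # size (its probability is on the order of 1e-7).
    print(is_estimate(n=127, t=10, p=0.01, q=0.12, n_samples=100_000))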

    Rewriting Flash Memories by Message Passing

    This paper constructs WOM codes that combine rewriting and error correction for mitigating the reliability and endurance problems in flash memory. We consider a rewriting model that is of practical interest to flash applications, where only the second write uses WOM codes. Our WOM code construction is based on binary erasure quantization with LDGM codes, where the rewriting uses message passing and has the potential to share efficient hardware implementations with LDPC codes in practice. We show that the coding scheme achieves the capacity of the rewriting model. Extensive simulations show that the rewriting performance of our scheme compares favorably with that of polar WOM codes in the rate region where a high rewriting success probability is desired. We further augment our coding schemes with error correction capability. By drawing a connection to the conjugate code pairs studied in the context of quantum error correction, we develop a general framework for constructing error-correcting WOM codes. Under this framework, we give an explicit construction of WOM codes whose codewords are contained in BCH codes.
    Comment: Submitted to ISIT 201
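
    The rewriting constraint that the LDGM construction handles at scale (cells may only move from 0 to 1 between writes) is easiest to see in the classic two-write WOM code of Rivest and Shamir, which stores two data bits twice in three cells. The sketch below implements that textbook code, not the paper's construction.

    # The two-write WOM code of Rivest and Shamir: two data bits are
    # written twice into three cells while cells only move 0 -> 1.
    FIRST  = {'00': '000', '01': '100', '10': '010', '11': '001'}
    SECOND = {'00': '111', '01': '011', '10': '101', '11': '110'}

    def write(data, cells=None):
        # First write uses FIRST; a rewrite keeps the cells if the data
        # is unchanged, otherwise moves to the SECOND-table codeword.
        if cells is None:
            return FIRST[data]
        if cells == FIRST[data]:
            return cells
        new = SECOND[data]
        assert all(c <= n for c, n in zip(cells, new)), "cells only go 0 -> 1"
        return new

    def read(cells):
        # Weight <= 1 means the cells still hold a first-write codeword.
        table = FIRST if cells.count('1') <= 1 else SECOND
        return next(d for d, c in table.items() if c == cells)

    c1 = write('01')          # -> '100'
    c2 = write('10', c1)      # rewrite -> '101'; no cell was lowered
    print(c1, c2, read(c1), read(c2))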