
    Mutual Information-Maximizing Quantized Belief Propagation Decoding of Regular LDPC Codes

    In mutual information-maximizing lookup table (MIM-LUT) decoding of low-density parity-check (LDPC) codes, table lookup operations replace arithmetic operations. In practice, large tables must be decomposed into small tables to reduce memory consumption, at the cost of degraded error performance. In this paper, we propose a method, called mutual information-maximizing quantized belief propagation (MIM-QBP) decoding, that removes the lookup tables used for MIM-LUT decoding. Our method leads to a very efficient decoder, the MIM-QBP decoder, which can be implemented using only simple mappings and fixed-point additions. Simulation results show that the MIM-QBP decoder consistently and considerably outperforms the state-of-the-art MIM-LUT decoder, mainly because it avoids the performance loss caused by table decomposition. Furthermore, with only 3 bits per message, the MIM-QBP decoder can outperform the floating-point belief propagation (BP) decoder in the high signal-to-noise ratio (SNR) region when tested on high-rate codes with a maximum of 10-30 iterations.
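    As a rough, hypothetical sketch of the update pattern this abstract describes (not the paper's actual mutual-information-maximizing design), the following shows a 3-bit message update built only from a reconstruction mapping, integer additions, and a threshold re-quantization. The values in RECON and THRESHOLDS, and the function variable_node_update, are illustrative placeholders.

```python
import numpy as np

# Hypothetical 3-bit message alphabet: each quantized message index is
# mapped ("reconstructed") to a fixed-point integer before combining.
RECON = np.array([-7, -5, -3, -1, 1, 3, 5, 7])
# Placeholder re-quantization boundaries mapping integer sums back to 3 bits.
THRESHOLDS = np.array([-6, -4, -2, 0, 2, 4, 6])

def variable_node_update(channel_msg, check_msgs):
    """Combine the channel message with incoming check-node messages using
    only integer additions, then re-quantize the sum to a 3-bit index."""
    total = int(RECON[channel_msg]) + sum(int(RECON[m]) for m in check_msgs)
    return int(np.searchsorted(THRESHOLDS, total, side="right"))

# Example: channel message index 6 combined with two check-node messages.
print(variable_node_update(6, [2, 5]))  # -> a 3-bit index in 0..7
```

    The point of the pattern is that no multiplications or table lookups appear in the update; in the paper the mappings are designed to maximize mutual information, whereas here they are arbitrary.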

    A Combinatorial Methodology for Optimizing Non-Binary Graph-Based Codes: Theoretical Analysis and Applications in Data Storage

    Non-binary (NB) low-density parity-check (LDPC) codes are graph-based codes that are increasingly being considered as a powerful error correction tool for modern dense storage devices. Optimizing NB-LDPC codes to overcome their error floor is one of the main code design challenges facing storage engineers when deploying such codes in practice. Furthermore, the increasing levels of asymmetry incorporated by the channels underlying modern dense storage systems, e.g., multi-level Flash systems, exacerbate the error floor problem by widening the spectrum of problematic objects that contribute to the error floor of an NB-LDPC code. In recent research, the weight consistency matrix (WCM) framework was introduced as an effective combinatorial NB-LDPC code optimization methodology suitable for modern Flash memory and magnetic recording (MR) systems. The WCM framework was used to optimize codes for asymmetric Flash channels and MR channels that have intrinsic memory, in addition to canonical symmetric additive white Gaussian noise channels. In this paper, we provide the in-depth theoretical analysis needed to understand and properly apply the WCM framework. We focus on general absorbing sets of type two (GASTs) as the detrimental objects of interest. In particular, we introduce a novel tree representation of a GAST, called the unlabeled GAST tree, using which we prove that the WCM framework is optimal in the sense that it operates on the minimum number of matrices, the WCMs, needed to remove a GAST. We then enumerate WCMs and demonstrate the significance of the savings achieved by the WCM framework in the number of matrices processed to remove a GAST. Moreover, we provide a linear-algebraic analysis of the null spaces of the WCMs associated with a GAST. We derive the minimum number of edge weight changes needed to remove a GAST via its WCMs, along with how to choose these changes. Additionally, we propose a new class of problematic objects, oscillating sets of type two (OSTs), which contribute to the error floor of NB-LDPC codes with even column weights on asymmetric channels, and we show how to customize the WCM framework to remove OSTs. We also extend the domain of WCM framework applications by demonstrating its benefits in optimizing column-weight-5 codes, codes used over Flash channels with soft information, and spatially coupled codes. The performance gains achieved via the WCM framework range between 1 and nearly 2.5 orders of magnitude in the error floor region over the channels of interest.
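    The abstract's linear-algebraic analysis concerns null spaces of WCMs over GF(q). As a hedged illustration of that underlying computation only, the sketch below finds a null-space basis of a small matrix over a prime field by Gaussian elimination; the matrix W, the field size Q, and the function null_space_gfq are illustrative stand-ins, not an actual WCM or field from the paper.

```python
import numpy as np

Q = 5  # assumed prime field size, so arithmetic mod Q forms a field

def null_space_gfq(A, q):
    """Return a basis for the null space of A over GF(q), prime q,
    via row reduction to reduced row echelon form."""
    A = np.array(A, dtype=int) % q
    rows, cols = A.shape
    pivot_cols, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue  # no pivot here; this becomes a free column
        A[[r, pivot]] = A[[pivot, r]]                 # swap pivot row up
        A[r] = (A[r] * pow(int(A[r, c]), -1, q)) % q  # scale pivot to 1
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % q    # clear column c
        pivot_cols.append(c)
        r += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivot_cols):
        v = np.zeros(cols, dtype=int)
        v[f] = 1
        for i, pc in enumerate(pivot_cols):
            v[pc] = (-A[i, f]) % q  # back-substitute the pivot variables
        basis.append(v)
    return basis

# Arbitrary small example matrix (NOT a WCM from the paper).
W = [[1, 2, 0, 4],
     [0, 1, 3, 1]]
for v in null_space_gfq(W, Q):
    print(v, np.dot(W, v) % Q)  # each product is the zero vector
```

    In the paper's setting, edge-weight assignments that avoid such null spaces are what break the corresponding GAST; this sketch shows only the generic GF(q) computation, not the WCM construction itself.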