Construction of Rate (n-1)/n Non-Binary LDPC Convolutional Codes via Difference Triangle Sets
This paper provides a construction of non-binary LDPC convolutional codes which generalizes the work of Robinson and Bernstein. The sets of integers forming an (n-1, w)-difference triangle set are used as supports of the columns of rate (n-1)/n convolutional codes. If the field size is large enough, the Tanner graph associated to the sliding parity-check matrix of the code is free from 4- and 6-cycles not satisfying the full rank condition. This is important for improving the performance of a code and avoiding the presence of low-weight codewords and absorbing sets. The parameters of the convolutional code are shown to be determined by the parameters of the underlying difference triangle set. In particular, the free distance of the code is related to w, and the degree of the code is linked to the "scope" of the difference triangle set. Hence, the problem of finding families of difference triangle sets with minimum scope is equivalent to finding convolutional codes with small degree.
Comment: The paper was submitted to ISIT 202
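As a hedged illustration of the combinatorial object behind this construction, the sketch below checks the defining property of a difference triangle set (all differences formed within each set are distinct across the whole collection), computes its scope, and turns each set into the exponent support of a sparse polynomial column. The function names, the example sets, and the all-ones placeholder coefficients are assumptions made for illustration; they are not taken from the paper.

```python
# A minimal sketch, not the paper's construction: verify the difference
# triangle set property, compute the scope, and use each set as the exponent
# support of one sparse polynomial column (placeholder coefficients of 1;
# the paper's point is that coefficients must come from a large enough field).

from itertools import combinations

def is_difference_triangle_set(sets):
    """True if all differences b - a (a < b in the same set) are distinct across all sets."""
    diffs = [b - a for s in sets for a, b in combinations(sorted(s), 2)]
    return len(diffs) == len(set(diffs))

def scope(sets):
    """Largest element in any set; per the abstract, this drives the degree of the code."""
    return max(max(s) for s in sets)

# Two sets of weight w = 3, e.g. supports for n - 1 = 2 columns of a rate 2/3 code.
D = [{0, 1, 3}, {0, 4, 9}]
assert is_difference_triangle_set(D)   # differences {1, 2, 3} and {4, 5, 9} never repeat
print("scope:", scope(D))              # 9

# Exponent supports -> sparse polynomial columns h_j(z) = sum over d in D_j of c_d * z**d.
columns = [{d: 1 for d in sorted(Dj)} for Dj in D]
print(columns)                         # [{0: 1, 1: 1, 3: 1}, {0: 1, 4: 1, 9: 1}]
```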
Construction of LDPC convolutional codes via difference triangle sets
In this paper, a construction of LDPC convolutional codes over arbitrary finite fields, which generalizes the work of Robinson and Bernstein and the later work of Tong, is provided. The sets of integers forming a (k, w)-(weak) difference triangle set are used as supports of some columns of the sliding parity-check matrix of an (n, k) convolutional code, where n and k are positive integers with n > k. The parameters of the convolutional code are related to the parameters of the underlying difference triangle set. In particular, a relation between the free distance of the code and w is established, as well as a relation between the degree of the code and the scope of the difference triangle set. Moreover, we show that some conditions on the weak difference triangle set ensure that the Tanner graph associated to the sliding parity-check matrix of the convolutional code is free from cycles not satisfying the full rank condition over any finite field. Finally, we relax these conditions and provide a lower bound on the field size, depending on the parity of w, that is sufficient to still avoid such cycles. This is important for improving the performance of a code and avoiding the presence of low-weight codewords and absorbing sets.
Comment: 22 pages. Extended version of arXiv:2001.0796
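As a hedged, generic sketch of why distinct differences relate to cycle avoidance (this illustrates the standard 4-cycle criterion, not the paper's theorems or its full rank condition): in a binary parity-check matrix, the Tanner graph has a 4-cycle exactly when two columns share nonzero entries in two common rows, and when columns are time-shifted copies of difference-triangle-set supports, such a double overlap corresponds to a repeated difference. The sliding model below (each support shifted down by one row per time step) is an assumption for demonstration.

```python
# Generic sketch only: detect 4-cycles among columns given as sets of
# nonzero row positions, with a toy "sliding" model in which every support
# is repeated at successive shifts (an assumption for illustration).

from itertools import combinations

def has_4_cycle(columns):
    """A Tanner graph has a 4-cycle iff some two columns share >= 2 nonzero row positions."""
    return any(len(c1 & c2) >= 2 for c1, c2 in combinations(columns, 2))

def sliding_columns(supports, num_shifts):
    """Shifted copies of each support: the column for set D at time t has rows {d + t for d in D}."""
    return [{d + t for d in D} for D in supports for t in range(num_shifts)]

good = [{0, 1, 3}, {0, 4, 9}]   # within-set differences {1, 2, 3} and {4, 5, 9}: no repeats
bad  = [{0, 1, 3}, {0, 2, 7}]   # the difference 2 appears in both sets (3 - 1 and 2 - 0)

print(has_4_cycle(sliding_columns(good, 12)))  # False: no repeated difference, no double overlap
print(has_4_cycle(sliding_columns(bad, 12)))   # True: {0,1,3} and {0,2,7} shifted by 1 = {1,3,8} meet in rows 1 and 3
```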
LDPC Codes over the q-ary Multi-Bit Channel
In this paper, we introduce a new channel model termed the q-ary multi-bit channel. This channel models a memory device where q-ary symbols (q = 2^s) are stored in the form of current/voltage levels. The symbols are read in a measurement process that provides one symbol bit in each measurement step, starting from the most significant bit. An error event occurs when not all of the symbol bits are known. To deal with such error events, we use GF(q) low-density parity-check (LDPC) codes and analyze their decoding performance. We start with an iterative-decoding threshold analysis and derive optimal edge-label distributions that maximize the decoding threshold. We later move to a finite-length iterative-decoding analysis and propose an edge-labeling algorithm for improved decoding performance. We then provide a finite-length maximum-likelihood decoding analysis for both the standard non-binary random ensemble and LDPC ensembles. Finally, we demonstrate by simulations that the proposed edge-labeling algorithm improves the finite-length decoding performance by orders of magnitude.
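A small simulation sketch of the read process described above may help; it is an illustration only. The uniform draw of how many most-significant bits get resolved per symbol is an assumption made for the demo and is not specified by the abstract.

```python
# Hedged illustration of the q-ary multi-bit read process (q = 2**s):
# symbols are read MSB-first, and an error event leaves the remaining
# low-order bits unknown, i.e. a set of candidate symbols larger than one.

import random

def read_symbol(symbol, s, resolved_bits):
    """Return the set of symbols consistent with the first `resolved_bits` MSBs of `symbol`."""
    low_bits = s - resolved_bits
    known_prefix = symbol >> low_bits
    return {(known_prefix << low_bits) | rest for rest in range(1 << low_bits)}

s = 3                 # q = 8 levels per cell
stored = 0b101        # the stored level, decimal 5
for t in range(s + 1):
    print(t, sorted(read_symbol(stored, s, t)))
# t = 0 -> all 8 candidates (nothing read yet); t = 3 -> {5}, fully resolved.

# Toy channel draw: each symbol independently resolves a random number of MSBs (assumption).
def channel(symbols, s):
    return [read_symbol(x, s, random.randint(0, s)) for x in symbols]

print(channel([1, 4, 7], s))
```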
A Combinatorial Methodology for Optimizing Non-Binary Graph-Based Codes: Theoretical Analysis and Applications in Data Storage
Non-binary (NB) low-density parity-check (LDPC) codes are graph-based codes that are increasingly being considered as a powerful error correction tool for modern dense storage devices. Optimizing NB-LDPC codes to overcome their error floor is one of the main code design challenges facing storage engineers upon deploying such codes in practice. Furthermore, the increasing levels of asymmetry incorporated by the channels underlying modern dense storage systems, e.g., multi-level Flash systems, exacerbate the error floor problem by widening the spectrum of problematic objects that contribute to the error floor of an NB-LDPC code. In recent research, the weight consistency matrix (WCM) framework was introduced as an effective combinatorial NB-LDPC code optimization methodology suitable for modern Flash memory and magnetic recording (MR) systems. The WCM framework was used to optimize codes for asymmetric Flash channels and MR channels that have intrinsic memory, in addition to canonical symmetric additive white Gaussian noise channels. In this paper, we provide the in-depth theoretical analysis needed to understand and properly apply the WCM framework. We focus on general absorbing sets of type two (GASTs) as the detrimental objects of interest. In particular, we introduce a novel tree representation of a GAST, called the unlabeled GAST tree, using which we prove that the WCM framework is optimal in the sense that it operates on the minimum number of matrices, which are the WCMs, needed to remove a GAST. Then, we enumerate WCMs and demonstrate the significance of the savings achieved by the WCM framework in the number of matrices processed to remove a GAST. Moreover, we provide a linear-algebraic analysis of the null spaces of the WCMs associated with a GAST. We derive the minimum number of edge weight changes needed to remove a GAST via its WCMs, along with how to choose these changes. Additionally, we propose a new set of problematic objects, namely oscillating sets of type two (OSTs), which contribute to the error floor of NB-LDPC codes with even column weights on asymmetric channels, and we show how to customize the WCM framework to remove OSTs. We also extend the domain of applications of the WCM framework by demonstrating its benefits in optimizing column weight 5 codes, codes used over Flash channels with soft information, and spatially-coupled codes. The performance gains achieved via the WCM framework range between 1 and nearly 2.5 orders of magnitude in the error floor region over the channels of interest.
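The null-space computations mentioned above can be given a generic flavor with the sketch below: it computes the right null space of a small matrix over a prime field and shows how changing a single entry (an "edge weight") alters which vectors are annihilated. This is not the WCM framework or a GAST-removal procedure; the field GF(5), the example matrix, and the helper name are assumptions chosen only to keep the arithmetic simple.

```python
# Generic illustration only (not the authors' WCM algorithm): null space of a
# small matrix over a prime field GF(p), and the effect of changing one entry.

p = 5  # assumed prime field for the demo

def null_space_mod_p(A, p):
    """Return a basis of the right null space of A over GF(p); A is a list of rows."""
    m, n = len(A), len(A[0])
    M = [row[:] for row in A]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)                  # modular inverse via Fermat
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [0] * n
        v[free] = 1
        for i, c in enumerate(pivots):
            v[c] = (-M[i][free]) % p
        basis.append(v)
    return basis

# A small check matrix whose null space contains an all-nonzero vector.
A = [[1, 1, 0],
     [0, 1, 4]]
print(null_space_mod_p(A, p))   # [[4, 1, 1]]: an all-nonzero null vector exists

# Change a single entry ("edge weight"): every surviving null vector now has a
# zero component, the kind of structural change edge-weight optimization exploits.
A[1][2] = 0
print(null_space_mod_p(A, p))   # [[0, 0, 1]]
```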