    Lowering the Error Floor of LDPC Codes Using Cyclic Liftings

    Cyclic liftings are proposed to lower the error floor of low-density parity-check (LDPC) codes. The liftings are designed to eliminate dominant trapping sets of the base code by removing the short cycles which form the trapping sets. We derive a necessary and sufficient condition on the cyclic permutations assigned to the edges of a cycle $c$ of length $\ell(c)$ in the base graph such that the inverse image of $c$ in the lifted graph consists only of cycles of length strictly larger than $\ell(c)$. The proposed method is universal in the sense that it can be applied to any LDPC code over any channel and for any iterative decoding algorithm. It also preserves important properties of the base code such as degree distributions, encoder and decoder structure, and in some cases, the code rate. The proposed method is applied to both structured and random codes over the binary symmetric channel (BSC). The error floor improves consistently with increasing lifting degree, and the results show significant improvements in the error floor compared to the base code, a random code of the same degree distribution and block length, and a random lifting of the same degree. Similar improvements are also observed when the codes designed for the BSC are applied to the additive white Gaussian noise (AWGN) channel.
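    The condition on the edge permutations can be illustrated with the standard circulant-lift computation: under a cyclic lifting of degree $N$, the fate of a base cycle is governed by the net shift accumulated around it. The sketch below states this classical circulant-permutation fact rather than the paper's exact derivation, and the function names are illustrative:

    from math import gcd

    def lifted_cycle_structure(shifts, N):
        """Signed circulant shifts along the edges of a base cycle (sign
        encodes traversal direction) under a cyclic lifting of degree N.
        The base cycle of length len(shifts) lifts to gcd(N, S) cycles,
        each of length len(shifts) * N / gcd(N, S), where S is the net
        shift mod N."""
        S = sum(shifts) % N
        g = gcd(N, S)                   # note gcd(N, 0) == N
        return g, len(shifts) * N // g

    def all_lifted_cycles_longer(shifts, N):
        """True iff every cycle in the inverse image is strictly longer
        than the base cycle, i.e. the net shift is nonzero mod N."""
        return sum(shifts) % N != 0

    # Example: a 4-cycle with edge shifts +1, -3, +2, -4, lifting degree 5.
    print(lifted_cycle_structure([1, -3, 2, -4], 5))    # (1, 20): one 20-cycle
    print(all_lifted_cycles_longer([1, -3, 2, -4], 5))  # True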

    Density Evolution for Asymmetric Memoryless Channels

    Density evolution is one of the most powerful analytical tools for low-density parity-check (LDPC) codes and graph codes with message-passing decoding algorithms. With channel symmetry as one of its fundamental assumptions, density evolution (DE) has been widely and successfully applied to different channels, including binary erasure channels, binary symmetric channels, and binary additive white Gaussian noise channels. This paper generalizes density evolution to non-symmetric memoryless channels, which in turn broadens its applications to general memoryless channels, e.g. z-channels and composite white Gaussian noise channels. The central theorem underpinning this generalization is the convergence to perfect projection for any fixed-size supporting tree. A new iterative formula of the same complexity is then presented, and the theorems necessary for performance concentration are developed. Several properties of the new density evolution method are explored, including stability results for general asymmetric memoryless channels. Simulations, code optimizations, and possible new applications suggested by this new density evolution method are also provided. This result is also used to prove the typicality of linear LDPC codes among the coset code ensemble when the minimum check node degree is sufficiently large. It is shown that the convergence to perfect projection is essential to the belief propagation algorithm even when only symmetric channels are considered. Hence the proof of the convergence to perfect projection also serves as a completion of the theory of classical density evolution for symmetric memoryless channels. Comment: To appear in the IEEE Transactions on Information Theory.
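    For context, classical symmetric-channel density evolution reduces on the BEC to the scalar recursion $x_{t+1} = \epsilon\,\lambda(1-\rho(1-x_t))$; the paper's generalization tracks full message densities instead of this single erasure probability. A minimal sketch of that symmetric baseline (variable names illustrative):

    def de_bec(eps, lam, rho, iters=200):
        """Classical density evolution on the BEC.  x is the probability
        that a variable-to-check message is an erasure; lam/rho are
        coefficient lists of the edge-perspective degree distributions,
        e.g. lam[i] = fraction of edges on degree-(i+1) variable nodes."""
        poly = lambda c, x: sum(ci * x ** i for i, ci in enumerate(c))
        x = eps
        for _ in range(iters):
            # x_{t+1} = eps * lambda(1 - rho(1 - x_t))
            x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
        return x

    # (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5 (BEC threshold ~0.4294)
    print(de_bec(0.42, [0, 0, 1], [0, 0, 0, 0, 0, 1]))  # -> ~0, below threshold
    print(de_bec(0.44, [0, 0, 1], [0, 0, 0, 0, 0, 1]))  # -> positive fixed point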

    Effects of Single-Cycle Structure on Iterative Decoding for Low-Density Parity-Check Codes

    We consider communication over the binary erasure channel (BEC) using low-density parity-check (LDPC) codes and belief propagation (BP) decoding. For a fixed number of BP iterations, the bit error probability approaches a limit as the blocklength tends to infinity, and the limit is obtained via density evolution. On the other hand, the difference between the bit error probability of codes with blocklength $n$ and that in the large-blocklength limit is asymptotically $\alpha(\epsilon,t)/n + \Theta(n^{-2})$, where $\alpha(\epsilon,t)$ denotes a specific constant determined by the code ensemble considered, the number $t$ of iterations, and the erasure probability $\epsilon$ of the BEC. In this paper, we derive a set of recursive formulas which allows evaluation of the constant $\alpha(\epsilon,t)$ for standard irregular ensembles. The dominant difference $\alpha(\epsilon,t)/n$ can be regarded as the effect of cycle-free and single-cycle structures of local graphs. Furthermore, it is confirmed via numerical simulations that estimation of the bit error probability using $\alpha(\epsilon,t)$ is accurate even for small blocklengths. Comment: 16 pages, 7 figures, submitted to IEEE Transactions on Information Theory.
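    The paper's recursive formulas for $\alpha(\epsilon,t)$ are not reproduced here, but the stated asymptotic form already suggests a simple consistency check: measuring the bit error probability at two blocklengths and differencing cancels the unknown limit term. A hedged sketch (inputs would come from simulation; all names are illustrative):

    def estimate_alpha(pb_n1, n1, pb_n2, n2):
        """From P_b(n) = P_inf + alpha/n + Theta(1/n^2), subtracting the
        model at two blocklengths eliminates P_inf:
            P_b(n1) - P_b(n2) ~ alpha * (1/n1 - 1/n2)."""
        return (pb_n1 - pb_n2) / (1.0 / n1 - 1.0 / n2)

    def pb_first_order(pb_inf, alpha, n):
        """First-order finite-length estimate of the bit error probability."""
        return pb_inf + alpha / n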

    A Message-Passing Algorithm for Counting Short Cycles in a Graph

    A message-passing algorithm for counting short cycles in a graph is presented. For bipartite graphs, which are of particular interest in coding, the algorithm is capable of counting cycles of length $g, g+2, \ldots, 2g-2$, where $g$ is the girth of the graph. For a general (non-bipartite) graph, cycles of length $g, g+1, \ldots, 2g-1$ can be counted. The algorithm is based on performing integer additions and subtractions in the nodes of the graph and passing extrinsic messages to adjacent nodes. The complexity of the proposed algorithm grows as $O(g|E|^2)$, where $|E|$ is the number of edges in the graph. For sparse graphs, the proposed algorithm significantly outperforms the existing algorithms in terms of computational complexity and memory requirements. Comment: Submitted to IEEE Trans. Inform. Theory, April 21, 2010.
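    A reduced version of the idea can be sketched for the girth itself: count tailless, backtrackless closed walks of length $g$ by passing integer walk counts along directed edges, since each $g$-cycle is traversed by exactly $2g$ such walks. This is a hedged simplification of the paper's algorithm (which also handles lengths up to $2g-2$ via extrinsic subtractions); the per-edge outer loop gives the same $O(g|E|^2)$ growth:

    from collections import defaultdict

    def count_girth_cycles(edges, g):
        """Count cycles of length g (assumed equal to the girth) in an
        undirected graph via non-backtracking walk counts."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        walks = 0
        for s in adj:
            for a in adj[s]:                    # fix the first step s -> a
                m = {(s, a): 1}                 # m[(u, v)]: walks ending with step u -> v
                for _ in range(g - 1):
                    nm = defaultdict(int)
                    for (u, v), c in m.items():
                        for w in adj[v]:
                            if w != u:          # backtrackless: never immediately retrace
                                nm[(v, w)] += c
                    m = nm
                # closed (ends at s) and tailless (last edge is not a -> s)
                walks += sum(c for (u, v), c in m.items() if v == s and u != a)
        return walks // (2 * g)                 # each g-cycle yields 2g such walks

    print(count_girth_cycles([(0, 1), (1, 2), (2, 0)], 3))  # triangle -> 1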

    Upper Bounds on the Rate of Low Density Stabilizer Codes for the Quantum Erasure Channel

    Using combinatorial arguments, we determine an upper bound on achievable rates of stabilizer codes used over the quantum erasure channel. This allows us to recover the no-cloning bound on the capacity of the quantum erasure channel, $R \leq 1-2p$, for stabilizer codes. We also derive an improved upper bound of the form $R \leq 1-2p-D(p)$, with a function $D(p)$ that stays positive for $0 < p < 1/2$, for any family of stabilizer codes whose generators have weights bounded from above by a constant (low-density stabilizer codes). We obtain an application to percolation theory for a family of self-dual tilings of the hyperbolic plane. We associate a family of low-density stabilizer codes with appropriate finite quotients of these tilings. We then relate the probability of percolation to the probability of a decoding error for these codes on the quantum erasure channel. Applying our upper bound on achievable rates of low-density stabilizer codes yields an upper bound on the critical probability for these tilings. Comment: 32 pages.

    Low Density Graph Codes And Novel Optimization Strategies For Information Transfer Over Impaired Medium

    Effective methods for information transfer over an imperfect medium are of great interest. This thesis addresses the following four topics involving low density graph codes and novel optimization strategies.

    Firstly, we study the performance of a promising coding technique: low-density generator matrix (LDGM) codes. LDGM codes provide satisfying performance while maintaining low encoding and decoding complexities. In the thesis, the performance of LDGM codes is derived for both majority-rule-based and sum-product iterative decoding algorithms. The ultimate performance of the coding scheme is revealed through distance spectrum analysis. We derive the distance spectra for both LDGM codes and concatenated LDGM codes. The results show that serially concatenated LDGM codes deliver extremely low error floors. This work provides valuable information for selecting the parameters of LDGM codes.

    Secondly, we investigate network coding on relay-assisted wireless multiple access (WMA) networks. Network coding is an effective way to increase the robustness and traffic capacity of networks. Following the framework of network coding, we introduce new network codes for WMA networks. The codes are constructed based on sparse graphs and can exploit the diversity available in both the time and space domains. The data integrity from relays can be compromised when the relays are deployed in open areas. For this, we propose a simple but robust security mechanism to verify the data integrity.

    Thirdly, we study the problem of bandwidth allocation for the transmission of multiple sources of data over a single communication medium. We aim to maximize the overall user satisfaction and formulate an optimization problem. Using either the logarithmic or exponential form of the satisfaction function, we derive closed-form optimal solutions and show that the optimal bandwidth allocation for each type of data is piecewise linear with respect to the total available bandwidth.

    Fourthly, we consider the optimization strategy for recovery of the target spectrum in filter-array-based spectrometers. We model the spectrophotometric system as a communication system in which the information content of the target spectrum is passed through distortive filters. By exploiting the non-negative nature of spectral content, a non-negative least-squares optimality criterion is found to be particularly effective. The concept is verified in a hardware implementation.
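    The fourth topic reduces to a constrained inverse problem: recover a non-negative spectrum $x$ from filter measurements $y \approx Ax$. A minimal sketch using SciPy's non-negative least-squares solver (the matrix sizes and data are purely illustrative, not from the thesis):

    import numpy as np
    from scipy.optimize import nnls

    # Illustrative setup: each row of A is one filter's spectral response,
    # y holds the measured filter outputs, and the target spectrum x is
    # recovered under the physical constraint x >= 0.
    rng = np.random.default_rng(0)
    A = rng.random((12, 8))                           # 12 filters, 8 spectral bins
    x_true = np.abs(rng.standard_normal(8))           # hypothetical spectrum
    y = A @ x_true + 0.01 * rng.standard_normal(12)   # noisy measurements

    x_hat, resid = nnls(A, y)    # argmin_{x >= 0} ||A x - y||_2
    print(np.round(x_hat, 3), resid)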

    A Combinatorial Methodology for Optimizing Non-Binary Graph-Based Codes: Theoretical Analysis and Applications in Data Storage

    Non-binary (NB) low-density parity-check (LDPC) codes are graph-based codes that are increasingly being considered as a powerful error correction tool for modern dense storage devices. Optimizing NB-LDPC codes to overcome their error floor is one of the main code design challenges facing storage engineers upon deploying such codes in practice. Furthermore, the increasing levels of asymmetry incorporated by the channels underlying modern dense storage systems, e.g., multi-level Flash systems, exacerbate the error floor problem by widening the spectrum of problematic objects that contribute to the error floor of an NB-LDPC code. In recent research, the weight consistency matrix (WCM) framework was introduced as an effective combinatorial NB-LDPC code optimization methodology suitable for modern Flash memory and magnetic recording (MR) systems. The WCM framework was used to optimize codes for asymmetric Flash channels and MR channels that have intrinsic memory, in addition to canonical symmetric additive white Gaussian noise channels. In this paper, we provide the in-depth theoretical analysis needed to understand and properly apply the WCM framework. We focus on general absorbing sets of type two (GASTs) as the detrimental objects of interest. In particular, we introduce a novel tree representation of a GAST, called the unlabeled GAST tree, using which we prove that the WCM framework is optimal in the sense that it operates on the minimum number of matrices, the WCMs, needed to remove a GAST. Then, we enumerate WCMs and demonstrate the significance of the savings the WCM framework achieves in the number of matrices processed to remove a GAST. Moreover, we provide a linear-algebraic analysis of the null spaces of the WCMs associated with a GAST. We derive the minimum number of edge weight changes needed to remove a GAST via its WCMs, along with how to choose these changes. Additionally, we propose a new class of problematic objects, namely oscillating sets of type two (OSTs), which contribute to the error floor of NB-LDPC codes with even column weights on asymmetric channels, and we show how to customize the WCM framework to remove OSTs. We also extend the domain of WCM framework applications by demonstrating its benefits in optimizing column weight 5 codes, codes used over Flash channels with soft information, and spatially coupled codes. The performance gains achieved via the WCM framework range between 1 and nearly 2.5 orders of magnitude in the error floor region over the channels considered.
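    The null-space analysis can be made concrete with a toy check: in the NB-LDPC absorbing-set setting, a configuration survives a given edge-weight assignment when the null space of the relevant weighted matrix over GF($q$) contains a vector with no zero entries (all variable nodes can take nonzero values), so removal amounts to choosing weights that destroy all such vectors. The following is a condensed, hedged sketch of that test, not the full WCM procedure; the `galois` package and the example matrices are assumptions of this illustration:

    import itertools
    import numpy as np
    import galois

    GF = galois.GF(4)

    def has_full_support_nullvector(W):
        """True if the null space of W over GF(q) contains a vector with
        no zero entries, i.e. the configuration survives this weight
        assignment; enumerates the (small) null space by brute force."""
        basis = W.null_space()          # rows span {x : W @ x = 0}
        for coeffs in itertools.product(GF.elements, repeat=basis.shape[0]):
            v = GF.Zeros(W.shape[1])
            for c, b in zip(coeffs, basis):
                v = v + c * b
            if np.all(v != 0):
                return True
        return False

    W = GF([[1, 1, 0], [0, 1, 1], [1, 0, 1]])  # toy weighted check matrix
    print(has_full_support_nullvector(W))      # True: (1,1,1) lies in the null space
    W2 = W.copy()
    W2[0, 0] = 2                               # a single edge-weight change
    print(has_full_support_nullvector(W2))     # False: the toy object is removed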