22 research outputs found

    A Probabilistic Peeling Decoder to Efficiently Analyze Generalized LDPC Codes Over the BEC

    Get PDF
    In this paper, we analyze the tradeoff between coding rate and asymptotic performance of a class of generalized low-density parity-check (GLDPC) codes constructed by including a certain fraction of generalized constraint (GC) nodes in the graph. The rate of the GLDPC ensemble is bounded using classical results on linear block codes, namely the Hamming bound and the Varshamov bound. We also study the impact of the decoding method used at GC nodes. To incorporate both bounded-distance (BD) and maximum likelihood (ML) decoding at GC nodes into our analysis without resorting to multi-edge-type degree distributions (DDs), we propose the probabilistic peeling decoding (P-PD) algorithm, which models the decoding step at every GC node as an instance of a Bernoulli random variable with a successful decoding probability that depends on both the GC block code and its decoding algorithm. The P-PD asymptotic performance over the BEC can be efficiently predicted using standard techniques for LDPC codes such as density evolution (DE) or the differential equation method. Furthermore, for a class of GLDPC ensembles, we demonstrate that the simulated P-PD performance accurately predicts the actual performance of the GLDPC code under ML decoding at GC nodes. We illustrate our analysis for GLDPC code ensembles with regular and irregular DDs. In all cases, we show that a large fraction of GC nodes is required to reduce the original gap to capacity, but the optimal fraction is strictly smaller than one. We then consider techniques to further reduce the gap to capacity by means of random puncturing and the inclusion of a certain fraction of generalized variable nodes in the graph. This work was supported in part by the Spanish Ministerio de Economía y Competitividad and the Agencia Española de Investigación under Grant TEC2016-78434-C3-3-R (AEI/FEDER, EU) and in part by the Comunidad de Madrid in Spain under Grant S2103/ICE-2845, Grant IND2017/TIC-7618, Grant IND2018/TIC-9649, and Grant Y2018/TCS-4705. P. M. Olmos was further supported by the Spanish Ministerio de Economía y Competitividad under Grant IJCI-2014-19150. T. Koch was further supported by the European Research Council (ERC) through the European Union’s Horizon 2020 research and innovation programme under Grant 714161, by the 7th European Union Framework Programme under Grant 333680, and by the Spanish Ministerio de Economía y Competitividad under Grant TEC2013-41718-R and Grant RYC-2014-16332
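
    The following is a minimal sketch (not taken from the paper) of the P-PD idea over the BEC: standard peeling at single parity-check (SPC) nodes, while each GC node with e erased neighbours resolves all of them with some probability p(e) that stands in for its component code and decoder. The scheduling and the interface (sets of variable indices, a p_success dictionary) are illustrative assumptions; for BD decoding of a component code with minimum distance d, p(e) would typically be 1 for e <= d - 1 and 0 otherwise.

        import random

        def ppd_decode(spc_checks, gc_checks, erased, p_success, max_iter=100):
            """One possible rendering of probabilistic peeling decoding (P-PD).

            spc_checks : list of sets of variable indices (single parity checks)
            gc_checks  : list of sets of variable indices (generalized constraints)
            erased     : set of erased variable indices
            p_success  : dict mapping e -> probability that a GC node with e
                         erased neighbours decodes all of them (depends on the
                         component code and on BD vs. ML decoding)
            """
            erased = set(erased)
            for _ in range(max_iter):
                progress = False
                # Standard peeling: an SPC node with exactly one erased edge
                # recovers that bit.
                for chk in spc_checks:
                    unresolved = chk & erased
                    if len(unresolved) == 1:
                        erased -= unresolved
                        progress = True
                # Bernoulli trial at GC nodes: with probability p_success[e]
                # the component decoder recovers all e erased positions at once.
                for chk in gc_checks:
                    unresolved = chk & erased
                    e = len(unresolved)
                    if e > 0 and random.random() < p_success.get(e, 0.0):
                        erased -= unresolved
                        progress = True
                if not erased or not progress:
                    break
            return erased    # residual erasures; empty set means success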

    On generalized LDPC codes for ultra reliable communication

    Get PDF
    Ultra-reliable low-latency communication (URLLC) is an important feature of future mobile communication systems, which will require high data rates, large system capacity and massive device connectivity [11]. To meet such stringent requirements, many error-correction codes (ECCs) are being investigated: turbo codes, low-density parity-check (LDPC) codes, polar codes and convolutional codes [70, 92, 38], among many others. In this work, we present generalized low-density parity-check (GLDPC) codes as a promising candidate for URLLC. Our proposal is based on a novel class of GLDPC code ensembles, for which new analysis tools are proposed. We analyze the trade-off between coding rate and asymptotic performance of a class of GLDPC codes constructed by including a certain fraction of generalized constraint (GC) nodes in the graph. To incorporate both bounded-distance (BD) and maximum likelihood (ML) decoding at GC nodes into our analysis without resorting to multi-edge-type degree distributions (DDs), we propose the probabilistic peeling decoding (P-PD) algorithm, which models the decoding step at every GC node as an instance of a Bernoulli random variable with a successful decoding probability that depends on both the GC block code and its decoding algorithm. The P-PD asymptotic performance over the BEC can be efficiently predicted using standard techniques for LDPC codes such as density evolution (DE) or the differential equation method. We demonstrate that the simulated P-PD performance accurately predicts the actual performance of the GLDPC code under ML decoding at GC nodes. We illustrate our analysis for GLDPC code ensembles with regular and irregular DDs. This design methodology is applied to construct practical codes for URLLC. To this end, we incorporate into our analysis the use of quasi-cyclic (QC) structures, which mitigate the code error floor and facilitate its very large scale integration (VLSI) implementation. Furthermore, for the additive white Gaussian noise (AWGN) channel, we analyze the complexity and performance of the message-passing decoder with various update rules (including standard full-precision sum-product and min-sum algorithms) and quantization schemes. The block error rate (BLER) performance of the proposed GLDPC codes, combined with a complementary outer code, is shown to outperform a variety of state-of-the-art codes for URLLC, including LDPC codes, polar codes, turbo codes and convolutional codes, at similar complexity. Doctoral programme: Programa Oficial de Doctorado en Multimedia y Comunicaciones. Thesis committee: Chair, Juan José Murillo Fuentes; Secretary, Matilde Pilar Sánchez Fernández; Member, Javier Valls Coquilla
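
    As a small illustration of the decoder update rules mentioned above, the sketch below implements a normalized min-sum check-node update, a common low-complexity approximation to the sum-product rule. The function name and the normalization factor alpha are illustrative choices, not taken from the thesis.

        import numpy as np

        def min_sum_check_update(llrs_in, alpha=0.75):
            """Check-to-variable messages under normalized min-sum.

            llrs_in : 1-D array of variable-to-check LLRs entering one check
                      node (check degree >= 2)
            alpha   : normalization factor (1.0 gives plain min-sum; values
                      around 0.7-0.8 are a common heuristic)
            """
            llrs_in = np.asarray(llrs_in, dtype=float)
            signs = np.where(llrs_in < 0, -1.0, 1.0)
            total_sign = np.prod(signs)
            mags = np.abs(llrs_in)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            out = np.empty_like(llrs_in)
            for i in range(len(llrs_in)):
                # Use the smallest magnitude among the *other* edges, and the
                # product of the signs of the other edges.
                other_min = min2 if i == order[0] else min1
                out[i] = alpha * total_sign * signs[i] * other_min
            return out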

    On LDPC Code Ensembles with Generalized Constraints

    Get PDF
    Proceeding of: 2017 IEEE International Symposium on Information Theory, Aachen, Germany, 25-30 June 2017. In this paper, we analyze the tradeoff between coding rate and asymptotic performance of a class of generalized low-density parity-check (GLDPC) codes constructed by including a certain fraction of generalized constraint (GC) nodes in the graph. The rate of the GLDPC ensemble is bounded using classical results on linear block codes, namely the Hamming bound and the Varshamov bound. We also study the impact of the decoding method used at GC nodes. To incorporate both bounded-distance (BD) and maximum likelihood (ML) decoding at GC nodes into our analysis without having to resort to multi-edge-type degree distributions (DDs), we propose the probabilistic peeling decoder (P-PD) algorithm, which models the decoding step at every GC node as an instance of a Bernoulli random variable with a success probability that depends on the GC block code and its decoding algorithm. The P-PD asymptotic performance over the BEC can be efficiently predicted using standard techniques for LDPC codes such as density evolution (DE) or the differential equation method. Furthermore, for a class of GLDPC ensembles, we demonstrate that the simulated P-PD performance accurately predicts the actual performance of the GLDPC code. We illustrate our analysis for GLDPC code ensembles using (2,6) and (2,15) base DDs. In all cases, we show that a large fraction of GC nodes is required to reduce the original gap to capacity. This work has been funded in part by the Spanish Ministerio de Economía y Competitividad and the Agencia Española de Investigación under Grant TEC2016-78434-C3-3-R (AEI/FEDER, EU) and by the Comunidad de Madrid in Spain under Grant S2103/ICE-2845. T. Koch has further received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 714161), from the 7th European Union Framework Programme under Grant 333680, and from the Spanish Ministerio de Economía y Competitividad under Grants TEC2013-41718-R and RYC-2014-16332. Pablo M. Olmos has further received funding from the Spanish Ministerio de Economía y Competitividad under Grant IJCI-2014-19150
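
    As a worked illustration of the rate/performance tradeoff discussed above (a sketch using the usual design-rate computation, not a formula quoted from the paper): in a (d_v, d_c)-regular base graph where a fraction ν of the check nodes are replaced by GC nodes realizing a (d_c, k) linear component code, each SPC node imposes one parity constraint and each GC node imposes d_c - k, so

        r \;\geq\; 1 - \frac{d_v}{d_c}\Bigl[(1-\nu)\cdot 1 \;+\; \nu\,(d_c - k)\Bigr]

    and the component-code parameters (d_c, k) attainable for a target minimum distance are in turn limited by classical bounds such as the Hamming bound cited in the abstract.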

    Applications of Coding Theory to Massive Multiple Access and Big Data Problems

    Get PDF
    The broad theme of this dissertation is the design of schemes that admit low-complexity iterative algorithms for new problems arising in massive multiple access and big data. Although bipartite Tanner graphs and low-complexity iterative algorithms such as peeling and message-passing decoders are very popular in the channel coding literature, they are not as widely used in these areas, and this dissertation serves as an important step toward bridging that gap. The contributions of this dissertation can be categorized into the following three parts. In the first part, a timely and interesting multiple access problem for a massive number of uncoordinated devices is considered, wherein the base station is interested only in recovering the list of messages without regard to the identity of the respective sources. A coding scheme with polynomial encoding and decoding complexities is proposed for this problem, the two main features of which are (i) a close-to-optimal coding scheme for the T-user Gaussian multiple access channel and (ii) a successive interference cancellation decoder. The proposed coding scheme not only improves on the performance of the previously best known coding scheme by ≈ 13 dB but is only ≈ 6 dB away from the random Gaussian coding information rate. In the second part, Construction-D lattices are built in which the underlying linear codes are nested binary spatially coupled low-density parity-check (SC-LDPC) codes with uniform left and right degrees. It is shown that the proposed lattices achieve the Poltyrev limit under multistage belief propagation decoding. Leveraging this result, lattice codes constructed from these lattices are applied to the three-user symmetric interference channel. For channel gains within 0.39 dB of the very strong interference regime, the proposed lattice coding scheme with the iterative belief propagation decoder, for target error rates of ≈ 10^-5, is only 2.6 dB away from the Shannon limit. The third part focuses on support recovery in compressed sensing and on nonadaptive group testing (GT). Prior to this work, sensing schemes based on left-regular sparse bipartite graphs and iterative recovery algorithms based on the peeling decoder were proposed for these problems. These schemes require O(K log N) and Ω(K log K log N) measurements, respectively, to recover the sparse signal with high probability (w.h.p.), where N and K denote the dimension and sparsity of the signal, respectively (K ≪ N). The number of measurements required to recover at least a (1 − ε) fraction of the defective items w.h.p. (approximate GT) is shown to be c_ε K log(N/K). In this dissertation, instead of left-regular bipartite graphs, sensing schemes based on left-and-right-regular bipartite graphs are analyzed. It is shown that this design strategy yields superior and sharper results. For the support recovery problem, the number of measurements is reduced to the optimal lower bound of Ω(K log(N/K)). Similarly, for approximate GT, the proposed scheme requires only c_ε K log(N/K) measurements. For probabilistic GT, the proposed scheme requires O(K log K log(N/K)) measurements, which is only a log K factor away from the best known lower bound of Ω(K log(N/K)). Beyond the asymptotic regime, the proposed schemes also demonstrate significant improvement in the required number of measurements for finite values of K and N
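
    A minimal sketch of the graph-construction step underlying the left-and-right-regular sensing/testing designs mentioned above, using the standard configuration model; the interface and the socket-matching construction are generic illustrations, not the dissertation's specific measurement matrices.

        import random

        def biregular_bipartite(num_left, left_deg, right_deg, seed=None):
            """Sample a left-and-right-regular bipartite graph (configuration
            model): every left node has degree left_deg, every right node has
            degree right_deg. num_left * left_deg must be divisible by right_deg.
            Returns the adjacency as a list mapping each right node to the list
            of its left neighbours (parallel edges are possible, as usual for
            configuration-model ensembles)."""
            rng = random.Random(seed)
            num_edges = num_left * left_deg
            assert num_edges % right_deg == 0, "degrees must balance"
            num_right = num_edges // right_deg
            # One socket per edge endpoint on the left; a random matching of
            # sockets to consecutive right-side groups defines the edges.
            left_sockets = [v for v in range(num_left) for _ in range(left_deg)]
            rng.shuffle(left_sockets)
            right_adj = [[] for _ in range(num_right)]
            for e, v in enumerate(left_sockets):
                right_adj[e // right_deg].append(v)
            return right_adj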

    Spatially coupled generalized LDPC codes: asymptotic analysis and finite length scaling

    Get PDF
    Generalized low-density parity-check (GLDPC) codes are a class of LDPC codes in which the standard single parity check (SPC) constraints are replaced by constraints defined by a linear block code. These stronger constraints typically result in improved error floor performance, due to better minimum distance and trapping set properties, at a cost of some increased decoding complexity. In this paper, we study spatially coupled generalized low-density parity-check (SC-GLDPC) codes and present a comprehensive analysis of these codes, including: (1) an iterative decoding threshold analysis of SC-GLDPC code ensembles demonstrating capacity approaching thresholds via the threshold saturation effect; (2) an asymptotic analysis of the minimum distance and free distance properties of SC-GLDPC code ensembles, demonstrating that the ensembles are asymptotically good; and (3) an analysis of the finite-length scaling behavior of both GLDPC block codes and SC-GLDPC codes based on a peeling decoder (PD) operating on a binary erasure channel (BEC). Results are compared to GLDPC block codes, and the advantages and disadvantages of SC-GLDPC codes are discussed. This work was supported in part by the National Science Foundation under Grant ECCS-1710920, Grant OIA-1757207, and Grant HRD-1914635; in part by the European Research Council (ERC) through the European Union's Horizon 2020 research and innovation program under Grant 714161; and in part by the Spanish Ministry of Science, Innovation and University under Grant TEC2016-78434-C3-3-R (AEI/FEDER, EU)
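
    For context on the threshold analysis mentioned in item (1), the sketch below computes the standard density-evolution erasure threshold of an uncoupled (d_v, d_c)-regular LDPC ensemble over the BEC. The GC-node and spatial-coupling generalizations studied in the paper modify the check-node update and make the recursion position dependent, so this is only the baseline recursion; the tolerances and the bisection routine are illustrative.

        def bec_de_threshold(dv, dc, tol=1e-7, max_iter=10000):
            """BP/DE erasure threshold of the (dv, dc)-regular ensemble on the BEC."""
            def converges(eps):
                # Standard DE recursion: x_{t+1} = eps * (1 - (1 - x_t)^(dc-1))^(dv-1)
                x = eps
                for _ in range(max_iter):
                    x_new = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                    if x_new < tol:
                        return True       # erasure fraction driven to zero
                    if abs(x_new - x) < tol * 1e-3:
                        return False      # stuck at a nonzero fixed point
                    x = x_new
                return x < tol
            lo, hi = 0.0, 1.0
            while hi - lo > 1e-6:         # bisection on the channel erasure rate
                mid = 0.5 * (lo + hi)
                if converges(mid):
                    lo = mid
                else:
                    hi = mid
            return lo

        # e.g. bec_de_threshold(3, 6) is approximately 0.429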

    A Scaling Law to Predict the Finite-Length Performance of Spatially-Coupled LDPC Codes

    Full text link
    Spatially-coupled LDPC codes are known to have excellent asymptotic properties. Much less is known regarding their finite-length performance. We propose a scaling law to predict the error probability of finite-length spatially-coupled ensembles when transmission takes place over the binary erasure channel. We discuss how the parameters of the scaling law are connected to fundamental quantities appearing in the asymptotic analysis of these ensembles and we verify that the predictions of the scaling law fit well to the data derived from simulations over a wide range of parameters. The ultimate goal of this line of research is to develop analytic tools for the design of spatially-coupled LDPC codes under practical constraints
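
    For reference, the block-code analogue of such a scaling law (the form derived by Amraoui et al. for uncoupled LDPC ensembles over the BEC, quoted here only as context and not as the law proposed in this paper) predicts the block error probability of an ensemble with blocklength n and threshold ε* as

        P_B(n,\varepsilon) \;\approx\; Q\!\left(\frac{\sqrt{n}\,\bigl(\varepsilon^{*} - \beta\, n^{-2/3} - \varepsilon\bigr)}{\alpha}\right)

    where α and β are ensemble-dependent scaling parameters; the paper develops the corresponding parameters and functional form for spatially-coupled ensembles.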

    Tree-Structure Expectation Propagation for LDPC Decoding over the BEC

    Full text link
    We present the tree-structure expectation propagation (Tree-EP) algorithm to decode low-density parity-check (LDPC) codes over discrete memoryless channels (DMCs). EP generalizes belief propagation (BP) in two ways. First, it can be used with any exponential family distribution over the cliques in the graph. Second, it can impose additional constraints on the marginal distributions. We use this second property to impose pair-wise marginal constraints over pairs of variables connected to a check node of the LDPC code's Tanner graph. Thanks to these additional constraints, the Tree-EP marginal estimates for each variable in the graph are more accurate than those provided by BP. We also reformulate the Tree-EP algorithm for the binary erasure channel (BEC) as a peeling-type algorithm (TEP) and show that it has the same computational complexity as BP while decoding a higher fraction of erasures. We describe the TEP decoding process by a set of differential equations that represents the expected residual graph evolution as a function of the code parameters. The solution of these equations is used to predict the TEP decoder performance over the BEC in both the asymptotic and the finite-length regimes. While the asymptotic threshold of the TEP decoder is the same as that of the BP decoder for regular and optimized codes, we propose a scaling law (SL) for finite-length LDPC codes that accurately approximates the improved TEP performance and facilitates its optimization
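
    A minimal sketch of the peeling-type view described above (an illustrative rendering, not the authors' implementation): degree-one residual checks are processed as in standard BP peeling, while a degree-two residual check lets its two remaining unknowns be merged into a single one, which is the extra step that gives TEP its gain on the BEC.

        def tep_decode(check_vars, known):
            """Peel a BEC residual graph using degree-1 checks (BP step) and
            degree-2 checks (TEP step: merge the two unknowns into one).

            check_vars : list of sets, check_vars[c] = variable indices in check c
            known      : dict var -> 0/1 for the non-erased bits
            Returns a dict with the known bits plus every erased bit it recovers.
            """
            # Residual graph: substitute the known bits into every check.
            residual = []
            for vars_c in check_vars:
                unknown = {v for v in vars_c if v not in known}
                target = 0
                for v in vars_c:
                    if v in known:
                        target ^= known[v]
                residual.append([unknown, target])

            recovered = dict(known)
            merged = {}                      # v -> (rep, offset): v = rep XOR offset

            def substitute(v, val):
                # Remove a now-known variable from all residual checks.
                for chk in residual:
                    if v in chk[0]:
                        chk[0].discard(v)
                        chk[1] ^= val

            progress = True
            while progress:
                progress = False
                for chk in residual:
                    if len(chk[0]) == 1:                 # BP peeling step
                        v = next(iter(chk[0]))
                        recovered[v] = chk[1]
                        substitute(v, chk[1])
                        progress = True
                    elif len(chk[0]) == 2:               # TEP step: v = u XOR s
                        u, v = sorted(chk[0])
                        s = chk[1]
                        merged[v] = (u, s)
                        chk[0].clear(); chk[1] = 0       # this check is satisfied
                        for other in residual:
                            if v in other[0]:
                                other[0].discard(v)
                                other[1] ^= s
                                if u in other[0]:        # u XOR u cancels
                                    other[0].discard(u)
                                else:
                                    other[0].add(u)
                        progress = True

            # Back-substitute merged variables whose representative was recovered.
            for v in merged:
                w, offset = v, 0
                while w in merged and w not in recovered:
                    rep, off = merged[w]
                    offset ^= off
                    w = rep
                if w in recovered:
                    recovered[v] = recovered[w] ^ offset
            return recovered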

    Changing edges in graphical model algorithms

    Get PDF
    Graphical models are used to describe the interactions in structures, such as the nodes in decoding circuits, agents in small-world networks, and neurons in our brains. These structures are often not static and can change over time, resulting in removal of edges, extra nodes, or changes in weights of the links in the graphs. For example, wires in message-passing decoding circuits can be misconnected due to process variation in nanoscale manufacturing or circuit aging, the style of passes among soccer players can change based on the team's strategy, and the connections among neurons can be broken due to Alzheimer's disease. The effects of these changes in graphs can reveal useful information and inspire approaches to understand some challenging problems. In this work, we investigate the dynamic changes of edges in graphs and develop mathematical tools to analyze the effects of these changes by embedding the graphical models in two applications. The first half of the work is about the performance of message-passing LDPC decoders in the presence of permanently and transiently missing connections, which is equivalent to the removal of edges in the codes' graphical representations (Tanner graphs). We prove concentration and convergence theorems that validate the use of density evolution for performance analysis and conclude that arbitrarily small error probability is not possible for decoders with missing connections. However, we find suitably defined decoding thresholds for communication systems with binary erasure channels under peeling decoding, as well as binary symmetric channels under Gallager A and B decoding. We see that decoding is robust to missing wires, as decoding thresholds degrade smoothly. Surprisingly, we discovered the stochastic facilitation (SF) phenomenon in Gallager B decoders, where having more missing connections helps improve the decoding thresholds under some conditions. The second half of the work is about the advantages of the semi-metric property of complex weighted networks. Nodes in graphs represent elements in systems and edges describe the level of interactions among the nodes. A semi-metric edge in a graph, which violates the triangle inequality, indicates that there is another latent relation between the pair of nodes connected by the edge. We show the equivalence between modelling a sporting event using a stochastic Markov chain and an algebraic diffusion process, and we also show that using the algebraic representation to calculate the stationary distribution of a network can preserve the graph's semi-metric property, which is lost in stochastic models. These semi-metric edges can be treated as redundant and pruned in all-pairs shortest-path problems to accelerate computations, which can be applied to more complicated problems such as PageRank. We then further demonstrate the advantages of semi-metricity in graphs by showing that the percentage of semi-metric edges in the interaction graphs of two soccer teams changes linearly with the final score. Interestingly, these redundant edges can be interpreted as a measure of a team's tactics
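
    A minimal sketch of the semi-metric edge test described above (a generic Dijkstra-based check on a weighted graph; the data layout is an illustrative assumption, and this is not the algebraic diffusion machinery used in the dissertation): an edge is semi-metric when some indirect path between its endpoints is shorter than the edge itself, so it is redundant for shortest-path computations.

        import heapq

        def semi_metric_edges(adj):
            """Return the semi-metric edges of an undirected weighted graph.

            adj : dict u -> dict v -> positive edge weight (symmetric).
            An edge (u, v) is semi-metric if some indirect path between u and v
            is shorter than the direct edge, i.e. it violates the (generalized)
            triangle inequality and never lies on a shortest path.
            """
            def shortest(src, dst, skip_edge):
                # Dijkstra from src to dst, ignoring the direct edge skip_edge.
                dist = {src: 0.0}
                heap = [(0.0, src)]
                while heap:
                    d, node = heapq.heappop(heap)
                    if node == dst:
                        return d
                    if d > dist.get(node, float("inf")):
                        continue
                    for nbr, wt in adj[node].items():
                        if {node, nbr} == skip_edge:
                            continue
                        nd = d + wt
                        if nd < dist.get(nbr, float("inf")):
                            dist[nbr] = nd
                            heapq.heappush(heap, (nd, nbr))
                return float("inf")

            found = []
            for u in adj:
                for v, w in adj[u].items():
                    if u < v and shortest(u, v, {u, v}) < w:
                        found.append((u, v))
            return found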

    Good Concatenated Code Ensembles for the Binary Erasure Channel

    Full text link
    In this work, we give good concatenated code ensembles for the binary erasure channel (BEC). In particular, we consider repeat multiple-accumulate (RMA) code ensembles formed by the serial concatenation of a repetition code with multiple accumulators, and the hybrid concatenated code (HCC) ensembles recently introduced by Koller et al. (5th Int. Symp. on Turbo Codes & Rel. Topics, Lausanne, Switzerland) consisting of an outer multiple parallel concatenated code serially concatenated with an inner accumulator. We introduce stopping sets for iterative constituent-code-oriented decoding using maximum a posteriori (MAP) erasure correction in the constituent codes. We then analyze the asymptotic stopping set distribution for RMA and HCC ensembles and show that their stopping distance h_min, defined as the size of the smallest nonempty stopping set, asymptotically grows linearly with the block length. Thus, these code ensembles are good for the BEC. It is shown that for RMA code ensembles, contrary to the asymptotic minimum distance d_min, whose growth rate coefficient increases with the number of accumulators, the h_min growth rate coefficient diminishes with the number of accumulators. We also consider random puncturing of RMA code ensembles and show that for sufficiently high code rates, the asymptotic h_min does not grow linearly with the block length, contrary to the asymptotic d_min, whose growth rate coefficient approaches the Gilbert-Varshamov bound as the rate increases. Finally, we give iterative decoding thresholds for the different code ensembles to compare the convergence properties. Comment: To appear in IEEE Journal on Selected Areas in Communications, special issue on Capacity Approaching Codes
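
    As a pointer to what a stopping set is in the simplest (Tanner graph) setting, the check below encodes the standard definition used for iterative erasure decoding; the constituent-code-oriented stopping sets introduced in the paper generalize this notion to MAP erasure correction inside each constituent code, so this snippet is only the baseline concept, not the paper's definition.

        def is_stopping_set(check_vars, S):
            """Standard Tanner-graph stopping-set test.

            check_vars : list of sets, check_vars[c] = variable indices in check c
            S          : candidate set of variable-node indices
            S is a stopping set iff no check node has exactly one neighbour in S,
            so iterative erasure decoding stalls when exactly the bits in S are
            erased.
            """
            S = set(S)
            return all(len(chk & S) != 1 for chk in check_vars)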