
    A Simple Proof of Maxwell Saturation for Coupled Scalar Recursions

    Low-density parity-check (LDPC) convolutional codes (or spatially-coupled codes) were recently shown to approach capacity on the binary erasure channel (BEC) and binary-input memoryless symmetric channels. The mechanism behind this spectacular performance is now called threshold saturation via spatial coupling. This new phenomenon is characterized by the belief-propagation threshold of the spatially-coupled ensemble increasing to an intrinsic noise threshold defined by the uncoupled system. In this paper, we present a simple proof of threshold saturation that applies to a wide class of coupled scalar recursions. Our approach is based on constructing potential functions for both the coupled and uncoupled recursions. Our results actually show that the fixed point of the coupled recursion is essentially determined by the minimum of the uncoupled potential function, and we refer to this phenomenon as Maxwell saturation. A variety of examples are considered, including the density-evolution equations for: irregular LDPC codes on the BEC, irregular low-density generator matrix codes on the BEC, a class of generalized LDPC codes with BCH component codes, the joint iterative decoding of LDPC codes on intersymbol-interference channels with erasure noise, and the compressed sensing of random vectors with i.i.d. components. Comment: This article is an extended journal version of arXiv:1204.5703 and has now been accepted to the IEEE Transactions on Information Theory. This version adds additional explanation for some details and also corrects a number of small typos.
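
    As a concrete illustration of the potential-function approach (the example and all names below are this summary's, not the paper's), the following sketch evaluates the single-system potential for the (3,6)-regular LDPC ensemble on the BEC, whose DE recursion is x <- eps*(1-(1-x)^5)^2, and bisects for the largest erasure probability at which the potential stays non-negative:

        import numpy as np

        # Single-system potential for the (3,6)-regular LDPC ensemble on the BEC.
        # Recursion: x <- f(g(x); eps) with f(y; eps) = eps*y**2, g(x) = 1-(1-x)**5.
        def g(x):
            return 1.0 - (1.0 - x) ** 5

        def G(x):                        # integral of g from 0 to x
            return x - (1.0 - (1.0 - x) ** 6) / 6.0

        def U(x, eps):                   # U(x; eps) = x*g(x) - G(x) - F(g(x); eps)
            return x * g(x) - G(x) - eps * g(x) ** 3 / 3.0

        def maxwell_threshold(tol=1e-9):
            xs = np.linspace(0.0, 1.0, 100001)
            lo, hi = 0.0, 1.0
            while hi - lo > tol:         # bisect on the sign of min_x U(x; eps)
                eps = 0.5 * (lo + hi)
                lo, hi = (eps, hi) if U(xs, eps).min() >= 0.0 else (lo, eps)
            return 0.5 * (lo + hi)

        print(maxwell_threshold())       # ~0.4881 for (3,6)

    The value returned, roughly 0.4881, is the Maxwell (MAP) threshold that the coupled recursion saturates to, while the plain BP threshold of the uncoupled recursion is only about 0.4294.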

    Deterministic and Ensemble-Based Spatially-Coupled Product Codes

    Several authors have proposed spatially-coupled (or convolutional-like) variants of product codes (PCs). In this paper, we focus on a parametrized family of generalized PCs that recovers some of these codes (e.g., staircase and block-wise braided codes) as special cases and study the iterative decoding performance over the binary erasure channel. Even though our code construction is deterministic (and not based on a randomized ensemble), we show that it is still possible to rigorously derive the density evolution (DE) equations that govern the asymptotic performance. The obtained DE equations are then compared to those for a related spatially-coupled PC ensemble. In particular, we show that there exists a family of (deterministic) braided codes that follows the same DE equation as the ensemble, for any spatial length and coupling width. Comment: accepted at ISIT 2016, Barcelona, Spain.

    Coordinated design of coding and modulation systems

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to directly compare various convolutional codes proposed for use in deep-space systems.
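
    As a toy illustration of the Viterbi decoding step mentioned above (the code and parameters here are a textbook example chosen for brevity, not the codes studied in the report), the following sketch encodes with the rate-1/2, constraint-length-3 convolutional code with octal generators (7,5) and corrects one channel error by hard-decision Viterbi decoding:

        # Rate-1/2, constraint-length-3 convolutional code, generators (7,5) octal.
        G1, G2, K = 0b111, 0b101, 3
        NSTATES = 1 << (K - 1)

        def parity(x):
            return bin(x).count("1") & 1

        def encode(bits):
            state, out = 0, []
            for u in bits:
                reg = (u << (K - 1)) | state     # current input above the state bits
                out += [parity(reg & G1), parity(reg & G2)]
                state = reg >> 1
            return out

        def viterbi(rx):
            INF = float("inf")
            metric = [0.0] + [INF] * (NSTATES - 1)   # encoder starts in state 0
            paths = [[] for _ in range(NSTATES)]
            for t in range(0, len(rx), 2):
                new_metric = [INF] * NSTATES
                new_paths = [None] * NSTATES
                for s in range(NSTATES):
                    if metric[s] == INF:
                        continue
                    for u in (0, 1):
                        reg = (u << (K - 1)) | s
                        d = (parity(reg & G1) != rx[t]) + (parity(reg & G2) != rx[t + 1])
                        ns = reg >> 1
                        if metric[s] + d < new_metric[ns]:   # keep the survivor
                            new_metric[ns] = metric[s] + d
                            new_paths[ns] = paths[s] + [u]
                metric, paths = new_metric, new_paths
            return paths[metric.index(min(metric))]

        msg = [1, 0, 1, 1, 0, 0]      # two tail zeros flush the encoder
        rx = encode(msg)
        rx[3] ^= 1                    # flip one channel bit
        assert viterbi(rx) == msg     # corrected: this code has dfree = 5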

    Formally Unimodular Packings for the Gaussian Wiretap Channel

    This paper introduces the family of lattice-like packings, which generalizes lattices, consisting of packings possessing periodicity and geometric uniformity. The subfamily of formally unimodular (lattice-like) packings is further investigated. It can be seen as a generalization of the unimodular and isodual lattices, and the Construction A formally unimodular packings obtained from formally self-dual codes are presented. Recently, lattice coding for the Gaussian wiretap channel has been considered. A measure called the secrecy function was proposed to characterize the eavesdropper's probability of correctly decoding. The aim is to determine the global maximum value of the secrecy function, called the (strong) secrecy gain. We further apply lattice-like packings to coset coding for the Gaussian wiretap channel and show that the family of formally unimodular packings shares the same secrecy function behavior as unimodular and isodual lattices. We propose a universal approach to determine the secrecy gain of a Construction A formally unimodular packing obtained from a formally self-dual code. From the weight distribution of a code, we provide a necessary condition for a formally self-dual code such that its Construction A formally unimodular packing is secrecy-optimal. Finally, we demonstrate that formally unimodular packings/lattices can achieve higher secrecy gain than the best-known unimodular lattices. Comment: Accepted for publication in IEEE Transactions on Information Theory. arXiv admin note: text overlap with arXiv:2111.0143
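
    The secrecy-gain condition above is stated in terms of a code's weight distribution and formal self-duality. As a small illustration of those two ingredients (the example code is ours, not the paper's), the sketch below brute-forces the weight distribution of the [8,4,4] extended Hamming code and checks, via the MacWilliams transform, that its weight enumerator is invariant, i.e., that the code is (formally) self-dual:

        from itertools import product
        from math import comb

        # Generator matrix of the [8,4,4] extended Hamming code (RM(1,3)).
        G = [
            [1, 1, 1, 1, 1, 1, 1, 1],
            [0, 1, 0, 1, 0, 1, 0, 1],
            [0, 0, 1, 1, 0, 0, 1, 1],
            [0, 0, 0, 0, 1, 1, 1, 1],
        ]
        n, k = len(G[0]), len(G)

        def weight_distribution():
            A = [0] * (n + 1)
            for msg in product((0, 1), repeat=k):     # enumerate all 2^k codewords
                w = sum(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
                A[w] += 1
            return A

        def macwilliams(A):
            # B_w = 2^-k * sum_v A_v * K_w(v), with the binary Krawtchouk K_w(v)
            return [sum(A[v] * sum((-1) ** j * comb(v, j) * comb(n - v, w - j)
                                   for j in range(w + 1))
                        for v in range(n + 1)) // 2 ** k
                    for w in range(n + 1)]

        A = weight_distribution()
        print(A)                      # [1, 0, 0, 0, 14, 0, 0, 0, 1]
        print(macwilliams(A) == A)    # True: the code is (formally) self-dual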

    A rate R=5/20 hypergraph-based woven convolutional code with free distance 120

    A rate R=5/20 hypergraph-based woven convolutional code with overall constraint length 67 is presented. It is based on a 3-partite, 3-uniform, 4-regular hypergraph and contains rate R=3/4 constituent convolutional codes with overall constraint length 5. Although the code construction is based on low-complexity codes, the free distance of this construction, computed with the BEAST algorithm, is dfree=120, which is remarkably large.
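
    The BEAST search itself is beyond a short example, but the quantity it computes can be illustrated on a toy code: the free distance is the minimum output weight over all trellis paths that diverge from and remerge with the all-zero state, which a shortest-path (Dijkstra) search over the state diagram finds directly. The rate R=1/2, memory-2 code below with octal generators (7,5) is an illustrative stand-in, not the woven construction of the paper:

        import heapq

        G1, G2, M = 0b111, 0b101, 2        # generator taps and encoder memory

        def parity(x):
            return bin(x).count("1") & 1

        def step(state, u):                # one trellis transition
            reg = (u << M) | state
            w = parity(reg & G1) + parity(reg & G2)
            return reg >> 1, w             # (next state, output weight)

        def free_distance():
            # Dijkstra from the diverging u=1 branch out of the zero state
            # until the path remerges into the zero state.
            start, w0 = step(0, 1)
            dist, pq = {start: w0}, [(w0, start)]
            while pq:
                d, s = heapq.heappop(pq)
                if s == 0:
                    return d               # remerged: minimum-weight detour found
                if d > dist.get(s, float("inf")):
                    continue
                for u in (0, 1):
                    ns, w = step(s, u)
                    if d + w < dist.get(ns, float("inf")):
                        dist[ns] = d + w
                        heapq.heappush(pq, (d + w, ns))

        print(free_distance())             # 5 for the (7,5) code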

    Constructions of Generalized Concatenated Codes and Their Trellis-Based Decoding Complexity

    In this correspondence, constructions of generalized concatenated (GC) codes with good rates and distances are presented. Some of the proposed GC codes have simpler trellis complexity than Euclidean geometry (EG), Reed–Muller (RM), or Bose–Chaudhuri–Hocquenghem (BCH) codes of approximately the same rates and minimum distances, and in addition can be decoded with trellis-based multistage decoding up to their minimum distances. Several codes of the same length, dimension, and minimum distance as the best linear codes known are constructed.

    A Scaling Law to Predict the Finite-Length Performance of Spatially-Coupled LDPC Codes

    Spatially-coupled LDPC codes are known to have excellent asymptotic properties. Much less is known regarding their finite-length performance. We propose a scaling law to predict the error probability of finite-length spatially-coupled ensembles when transmission takes place over the binary erasure channel. We discuss how the parameters of the scaling law are connected to fundamental quantities appearing in the asymptotic analysis of these ensembles, and we verify that the predictions of the scaling law agree well with simulation data over a wide range of parameters. The ultimate goal of this line of research is to develop analytic tools for the design of spatially-coupled LDPC codes under practical constraints.
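
    To make the asymptotic side of this analysis concrete, the sketch below iterates the coupled DE recursion of the (l, r, L, w) spatially-coupled ensemble on the BEC in the standard Kudekar-Richardson-Urbanke form, with terminated (zero) boundaries, and bisects for the threshold; all parameters and tolerances are illustrative choices of this sketch, and the paper's scaling law refines the sharp success/failure picture this recursion gives:

        import numpy as np

        def sc_de_converges(eps, l=3, r=6, L=50, w=3, max_iter=100000):
            N = L + 2 * (w - 1)                  # chain plus zero padding
            h = np.ones(w) / w                   # uniform coupling window
            x = np.zeros(N)
            chain = slice(w - 1, w - 1 + L)
            x[chain] = eps                       # padding positions stay pinned to 0
            for _ in range(max_iter):
                s = np.convolve(x, h)[:N]        # s[m] = mean of x[m-w+1 .. m]
                c = 1.0 - (1.0 - s) ** (r - 1)   # check-to-variable erasure prob.
                a = np.convolve(c, h)[w - 1:w - 1 + N]   # a[i] = mean c[i .. i+w-1]
                xn = np.zeros(N)
                xn[chain] = eps * a[chain] ** (l - 1)
                if xn.max() < 1e-9:
                    return True                  # the decoding wave cleared the chain
                if np.abs(xn - x).max() < 1e-12:
                    return False                 # stuck at a nonzero fixed point
                x = xn
            return False

        lo, hi = 0.0, 1.0
        for _ in range(15):                      # bisect for the coupled BP threshold
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if sc_de_converges(mid) else (lo, mid)
        print(lo)   # ~0.488 for (3,6,L=50,w=3), versus ~0.429 uncoupled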

    Spatially Coupled Turbo-Like Codes

    The focus of this thesis is on proposing and analyzing a powerful class of codes on graphs---with trellis constraints---that can simultaneously approach capacity and achieve very low error floors. In particular, we propose the concept of spatial coupling for turbo-like code (SC-TC) ensembles and investigate the impact of coupling on the performance of these codes. The main elements of this study can be summarized by the following four major topics.

    First, we considered the spatial coupling of parallel concatenated codes (PCCs), serially concatenated codes (SCCs), and hybrid concatenated codes (HCCs). We also proposed two extensions of braided convolutional codes (BCCs) to higher coupling memories.

    Second, we investigated the impact of coupling on the asymptotic behavior of the proposed ensembles in terms of their decoding thresholds. For that, we derived the exact density evolution (DE) equations of the proposed SC-TC ensembles over the binary erasure channel. Using the DE equations, we found the thresholds of the coupled and uncoupled ensembles under belief propagation (BP) decoding for a wide range of rates. We also computed the maximum a-posteriori (MAP) thresholds of the underlying uncoupled ensembles. Our numerical results confirm that TCs have excellent MAP thresholds, and for a large enough coupling memory, the BP threshold of an SC-TC ensemble improves to the MAP threshold of the underlying TC ensemble. This phenomenon is called threshold saturation, and we proved its occurrence for SC-TCs using a proof technique based on the potential function of the ensembles.

    Third, we investigated and discussed the performance of SC-TCs in the finite-length regime. We proved that under certain conditions the minimum distance of an SC-TC is larger than or equal to that of its underlying uncoupled ensemble. Based on this fact, we performed a weight enumerator (WE) analysis of the underlying uncoupled ensembles to investigate the error floor performance of the SC-TC ensembles. We computed bounds on the error rate performance and minimum distance of the TC ensembles. These bounds indicate very low error floors for SCC, HCC, and BCC ensembles, and show that for HCC and BCC ensembles the minimum distance grows linearly with the input block length. The results from the DE and WE analyses demonstrate that the performance of TCs benefits from spatial coupling in both the waterfall and error floor regions. While uncoupled TC ensembles with close-to-capacity performance exhibit a high error floor, our results show that SC-TCs can simultaneously approach capacity and achieve very low error floors.

    Fourth, we proposed a unified ensemble of TCs that includes all the considered TC classes. We showed that for each of the original classes of TCs, it is possible to find an equivalent ensemble by proper selection of the design parameters in the unified ensemble. This unified ensemble not only helps us to understand the connections and trade-offs between the TC ensembles but can also be considered a bridge between TCs and generalized low-density parity-check codes.
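
    As a minimal stand-in for the DE analysis described here (the SC-TC recursions of the thesis are vector-valued and considerably more involved), the sketch below bisects for the BP threshold of the uncoupled (3,6)-regular LDPC ensemble on the BEC from its scalar DE recursion; the iteration cap and tolerances are arbitrary choices of this sketch:

        # BP threshold of the uncoupled (l,r)-regular LDPC ensemble on the BEC:
        # the largest eps for which x <- eps*(1-(1-x)^(r-1))^(l-1) converges to 0.
        def bp_threshold(l=3, r=6, tol=1e-6, max_iter=20000):
            lo, hi = 0.0, 1.0
            while hi - lo > tol:
                eps, x = 0.5 * (lo + hi), 1.0
                for _ in range(max_iter):
                    x = eps * (1.0 - (1.0 - x) ** (r - 1)) ** (l - 1)
                    if x < 1e-12:
                        break
                lo, hi = (eps, hi) if x < 1e-12 else (lo, eps)
            return 0.5 * (lo + hi)

        print(bp_threshold())   # ~0.429; the MAP threshold of the same ensemble
                                # is ~0.488, the gap that coupling closes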

    Computational Intelligence and Complexity Measures for Chaotic Information Processing

    This dissertation investigates the application of computational intelligence methods in the analysis of nonlinear chaotic systems in the framework of many known and newly designed complex systems. Parallel comparisons are made between these methods. This provides insight into the difficult challenges facing nonlinear systems characterization and aids in developing a generalized algorithm in computing algorithmic complexity measures, Lyapunov exponents, information dimension and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems. These metrics make it possible to distinguish order from disorder in these systems. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group theory approach are formalized. Procedures for implementing computational algorithms are designed and numerical results for each system are presented. The advance-time sampling technique is designed to overcome the scarcity of phase space samples and the buffer overflow problem in algorithmic complexity measure estimation in slow dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system like a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier. This quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences. For periodic and quasi-periodic responses, as data strings grow their normalized complexity approaches zero, while a faster decreasing rate is observed for periodic responses. Algorithmic complexity analysis is performed on a class of convolutional encoders of certain rates. The degree of diffusion in random-like patterns is measured. Simulation evidence indicates that algorithmic complexity associated with a particular class of 1/n-rate code increases with the increase of the encoder constraint length. This occurs in parallel with the increase of error correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity along with a larger free distance.
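
    As an illustration of the normalized algorithmic complexity measure described above (our sketch; the dissertation's estimator details may differ), the code below counts Lempel-Ziv (LZ76) phrases by exhaustive parsing and compares a random string, a periodic string, and a binary symbolization of a logistic-map orbit in a chaotic regime:

        from math import log2
        from random import random, seed

        def lz76_phrases(s):
            # Exhaustive LZ76 parsing: each phrase is the shortest prefix of the
            # remaining string that has not occurred (overlap allowed) before it.
            n, pos, c = len(s), 0, 0
            while pos < n:
                m = 1
                while pos + m <= n and s[pos:pos + m] in s[:pos + m - 1]:
                    m += 1
                c += 1
                pos += m
            return c

        def normalized_lz(s):
            # normalization so that random binary strings score roughly 1
            return lz76_phrases(s) * log2(len(s)) / len(s)

        seed(1)
        n = 4096
        rand_s = "".join("1" if random() < 0.5 else "0" for _ in range(n))
        per_s = "10" * (n // 2)
        x, sym = 0.3, []
        for _ in range(n):                 # logistic map, chaotic parameter r = 3.8
            x = 3.8 * x * (1.0 - x)
            sym.append("1" if x > 0.5 else "0")
        print(normalized_lz(rand_s), normalized_lz(per_s), normalized_lz("".join(sym)))
        # roughly 1 for the random string, near 0 for the periodic one, and an
        # intermediate value for the chaotic orbit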