
    Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols

    In this paper, we describe direct-sequence code-division multiple-access (DS-CDMA) systems with quadriphase-shift keying in which channel estimation, coherent demodulation, and decoding are performed iteratively without the use of any training or pilot symbols. An expectation-maximization channel-estimation algorithm is proposed for DS-CDMA systems with irregular repeat-accumulate codes; it estimates the fading amplitude, the phase, and the power spectral density (PSD) of the combined interference and thermal noise. After initial estimates of the fading amplitude, phase, and interference PSD are obtained from the received symbols, these parameters are iteratively updated using soft feedback from the channel decoder. The updated estimates are combined with the received symbols and passed back to the decoder. The elimination of pilot symbols simplifies the system design and allows an enhanced information throughput, an improved bit error rate, or a greater spectral efficiency. The interference-PSD estimation enables DS-CDMA systems to suppress interference significantly. Comment: To appear, IEEE Transactions on Wireless Communications
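The E-step/M-step structure of such a pilot-free estimator can be sketched as follows. This is a minimal single-user, flat-fading illustration, not the paper's exact algorithm: the channel decoder's soft feedback is stood in for by symbol posteriors computed from the current channel estimate, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated flat-fading channel y_k = h * s_k + n_k with QPSK symbols s_k;
# h (amplitude and phase) and the noise power sigma^2 are unknown at the receiver.
n = 256
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s = rng.choice(qpsk, size=n)
h_true, sigma2_true = 0.8 * np.exp(0.3j), 0.1
noise = np.sqrt(sigma2_true / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = h_true * s + noise

h_hat, sigma2_hat = 1.0 + 0.0j, 1.0          # crude blind initialization
for _ in range(10):
    # E-step: symbol posteriors under the current channel estimate.  In the
    # full receiver these would be refined by the channel decoder's soft output.
    logp = -np.abs(y[:, None] - h_hat * qpsk[None, :]) ** 2 / sigma2_hat
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    s_soft = p @ qpsk                        # posterior-mean (soft) symbols
    # M-step: re-estimate the fading gain and the interference-plus-noise power.
    h_hat = (y * s_soft.conj()).sum() / (np.abs(s_soft) ** 2).sum()
    sigma2_hat = np.mean(np.abs(y - h_hat * s_soft) ** 2)

print(h_hat, sigma2_hat)   # converges toward h_true and sigma2_true
```

Note the usual QPSK phase ambiguity of multiples of 90 degrees: the blind initialization only works because the true phase offset here is small; the paper's receiver resolves this through the code structure.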

    Novel irregular LDPC codes and their application to iterative detection of MIMO systems

    Low-density parity-check (LDPC) codes are among the best-performing error-correction codes currently known. For the higher-performing irregular LDPC codes, degree distributions have been found that produce codes with optimal performance in the infinite-block-length case, but significant performance degradation is seen at more practical short block lengths. A major focus in the search for practical LDPC codes is therefore to find a construction method that minimises this loss in performance as block lengths shrink. In this work, a novel irregular LDPC code is proposed which uses the sum-product algorithm (SPA) decoder at the design stage in order to make the best choice of edge placement with respect to iterative decoding performance in the presence of noise. This method, a modification of the progressive edge growth (PEG) algorithm for edge placement in parity-check matrix (PCM) construction, is named the DOPEG algorithm. The DOPEG design algorithm is highly flexible in that the decoder-optimisation stage may be applied to any modification or extension of the original PEG algorithm with relative ease. To illustrate this fact, the decoder-optimisation step was applied to the IPEG modification of the PEG algorithm, which produces codes with comparatively excellent performance; this extension of the DOPEG is called the DOIPEG. A spatially multiplexed, iteratively detected and decoded coded multiple-input multiple-output (MIMO) system is then considered. The MIMO system under investigation is developed through theory, and a number of results are presented which illustrate its performance characteristics. The novel DOPEG code is tested in the MIMO system under consideration, and a significant performance gain is achieved.
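As context for the DOPEG construction, the baseline PEG edge-placement rule can be sketched as follows. This is a minimal BFS-based form of the original PEG algorithm (the decoder-optimisation stage that distinguishes DOPEG is not reproduced); graph sizes and degrees are illustrative.

```python
def peg(n_vars, n_checks, var_degrees):
    """Progressive edge growth: place each new edge on the check node farthest
    from the variable node in the current Tanner graph (or on any check not
    yet reachable), breaking ties by lowest current check degree."""
    var_adj = [set() for _ in range(n_vars)]
    chk_adj = [set() for _ in range(n_checks)]
    for v in range(n_vars):
        for _ in range(var_degrees[v]):
            # BFS from v: distance (in check layers) to every reachable check.
            dist = {c: 0 for c in var_adj[v]}
            frontier, seen_vars = list(var_adj[v]), {v}
            depth = 0
            while frontier:
                depth += 1
                nxt = []
                for c in frontier:
                    for u in chk_adj[c]:
                        if u not in seen_vars:
                            seen_vars.add(u)
                            for c2 in var_adj[u]:
                                if c2 not in dist:
                                    dist[c2] = depth
                                    nxt.append(c2)
                frontier = nxt
            unreached = [c for c in range(n_checks) if c not in dist]
            if unreached:                 # connecting here cannot close a cycle
                cand = unreached
            else:                         # otherwise maximise the cycle length
                far = max(dist.values())
                cand = [c for c, d in dist.items() if d == far]
            c_best = min(cand, key=lambda c: len(chk_adj[c]))
            var_adj[v].add(c_best)
            chk_adj[c_best].add(v)
    return var_adj

# A small regular code: 12 variable nodes of degree 3, 6 check nodes.
adj = peg(n_vars=12, n_checks=6, var_degrees=[3] * 12)
print([sorted(a) for a in adj])
```

The DOPEG idea described above replaces the degree-based tie-break with a choice informed by simulated SPA decoding performance.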

    Tight bounds for LDPC and LDGM codes under MAP decoding

    A new method for analyzing low-density parity-check (LDPC) codes and low-density generator-matrix (LDGM) codes under bit maximum a posteriori probability (MAP) decoding is introduced. The method is based on a rigorous approach to spin glasses developed by Francesco Guerra. It allows one to construct lower bounds on the entropy of the transmitted message conditioned on the received one. Based on heuristic statistical-mechanics calculations, we conjecture these bounds to be tight. The result holds for standard irregular ensembles used over binary-input output-symmetric channels. The method is first developed for Tanner graph ensembles with Poisson left-degree distribution. It is then generalized to 'multi-Poisson' graphs and, by a completion procedure, to arbitrary degree distributions. Comment: 28 pages, 9 eps figures; the second version contains a generalization of the previous result
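The quantity being bounded, the entropy of the transmitted message conditioned on the received one, can be computed exactly by brute force for a toy code. A sketch over a binary symmetric channel, with a small illustrative parity-check matrix (not one of the paper's ensembles):

```python
import itertools
import math
import numpy as np

# Parity-check matrix of a tiny code: n = 6 bits, rank 3, so 8 codewords.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])

n = H.shape[1]
codewords = [np.array(x) for x in itertools.product([0, 1], repeat=n)
             if not ((H @ x) % 2).any()]
p = 0.1                                     # BSC crossover probability
h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# H(X|Y) = H(X) + H(Y|X) - H(Y) in bits, with X uniform over the codewords.
H_Y = 0.0
for y in itertools.product([0, 1], repeat=n):
    y = np.array(y)
    py = sum(p ** (x ^ y).sum() * (1 - p) ** (n - (x ^ y).sum())
             for x in codewords) / len(codewords)
    H_Y -= py * math.log2(py)
H_X_given_Y = math.log2(len(codewords)) + n * h2 - H_Y
print(H_X_given_Y)
```

The paper's contribution is a rigorous lower bound on exactly this quantity in the asymptotic regime, where brute-force enumeration is impossible.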

    Spatially Coupled Turbo-Like Codes

    The focus of this thesis is on proposing and analyzing a powerful class of codes on graphs---with trellis constraints---that can simultaneously approach capacity and achieve a very low error floor. In particular, we propose the concept of spatial coupling for turbo-like code (SC-TC) ensembles and investigate the impact of coupling on the performance of these codes. The main elements of this study can be summarized by the following four major topics. First, we considered the spatial coupling of parallel concatenated codes (PCCs), serially concatenated codes (SCCs), and hybrid concatenated codes (HCCs). We also proposed two extensions of braided convolutional codes (BCCs) to higher coupling memories. Second, we investigated the impact of coupling on the asymptotic behavior of the proposed ensembles in terms of their decoding thresholds. For that, we derived the exact density evolution (DE) equations of the proposed SC-TC ensembles over the binary erasure channel. Using the DE equations, we found the thresholds of the coupled and uncoupled ensembles under belief propagation (BP) decoding for a wide range of rates. We also computed the maximum a posteriori (MAP) thresholds of the underlying uncoupled ensembles. Our numerical results confirm that TCs have excellent MAP thresholds, and that for a large enough coupling memory the BP threshold of an SC-TC ensemble improves to the MAP threshold of the underlying TC ensemble. This phenomenon is called threshold saturation, and we proved its occurrence for SC-TCs using a proof technique based on the potential function of the ensembles. Third, we investigated and discussed the performance of SC-TCs in the finite-length regime. We proved that under certain conditions the minimum distance of an SC-TC is larger than or equal to that of its underlying uncoupled ensemble.
    Based on this fact, we performed a weight enumerator (WE) analysis of the underlying uncoupled ensembles to investigate the error-floor performance of the SC-TC ensembles. We computed bounds on the error rate performance and minimum distance of the TC ensembles. These bounds indicate very low error floors for the SCC, HCC, and BCC ensembles, and show that for the HCC and BCC ensembles the minimum distance grows linearly with the input block length. The results of the DE and WE analyses demonstrate that the performance of TCs benefits from spatial coupling in both the waterfall and error-floor regions. While uncoupled TC ensembles with close-to-capacity performance exhibit a high error floor, our results show that SC-TCs can simultaneously approach capacity and achieve a very low error floor. Fourth, we proposed a unified ensemble of TCs that includes all the considered TC classes. We showed that for each of the original classes of TCs, it is possible to find an equivalent ensemble by proper selection of the design parameters in the unified ensemble. This unified ensemble not only helps us to understand the connections and trade-offs between the TC ensembles but can also be considered a bridge between TCs and generalized low-density parity-check codes.
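Threshold saturation is easiest to see numerically in a generic coupled ensemble. The sketch below runs the standard coupled density evolution recursion for a spatially coupled (dv, dc)-regular LDPC ensemble over the BEC, as a stand-in for the SC-TC DE equations derived in the thesis; the threshold values quoted in the comments are approximate literature figures for the (3, 6) ensemble.

```python
import numpy as np

def sc_de_residual(eps, L=50, w=3, dv=3, dc=6, iters=4000):
    """Density evolution for a spatially coupled (dv, dc)-regular LDPC
    ensemble over the BEC(eps): chain of L positions, coupling width w,
    positions outside the chain fixed to zero (perfectly known bits).
    Returns the largest residual erasure probability after `iters` rounds."""
    pad = w - 1
    x = np.zeros(L + 2 * pad)
    x[pad:pad + L] = eps
    for _ in range(iters):
        new = x.copy()
        for i in range(pad, pad + L):
            acc = 0.0
            for j in range(w):
                # average erasure probability entering checks at position i+j
                avg = sum(x[i + j - k] for k in range(w)) / w
                acc += 1.0 - (1.0 - avg) ** (dc - 1)
            new[i] = eps * (acc / w) ** (dv - 1)
        x = new
    return x.max()

# Uncoupled (3,6) BP threshold is ~0.429 and the MAP threshold ~0.488, so at
# eps = 0.45 plain BP density evolution gets stuck while the coupled chain
# decodes: a decoding wave starts at the (known) boundaries and moves inward.
res_uncoupled = sc_de_residual(0.45, w=1)
res_coupled = sc_de_residual(0.45)
print(res_uncoupled, res_coupled)
```

Setting w=1 recovers the uncoupled recursion, which makes the comparison one parameter change.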

    Update-Efficiency and Local Repairability Limits for Capacity Approaching Codes

    Motivated by distributed storage applications, we investigate the degree to which capacity-achieving encodings can be efficiently updated when a single information bit changes, and the degree to which such encodings can be efficiently (i.e., locally) repaired when a single encoded bit is lost. Specifically, we first develop conditions under which optimum error correction and update efficiency are possible, and establish that the number of encoded bits that must change in response to a change in a single information bit must scale logarithmically in the block length of the code if we are to achieve any nontrivial rate with vanishing probability of error over the binary erasure or binary symmetric channels. Moreover, we show there exist capacity-achieving codes with this scaling. With respect to local repairability, we develop tight upper and lower bounds on the number of remaining encoded bits that are needed to recover a single lost bit of the encoding. In particular, we show that if the code rate is ε less than the capacity, then for optimal codes the maximum number of codeword symbols required to recover one lost symbol must scale as log(1/ε). Several variations on---and extensions of---these results are also developed. Comment: Accepted to appear in JSA
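For a linear code, the update cost of flipping message bit i is exactly the Hamming weight of row i of the generator matrix, since the new codeword differs from the old one by that row. The sketch below contrasts a dense random generator with a sparse one whose row weight scales logarithmically in the block length, in the spirit of the logarithmic-scaling result above (the construction and parameters are illustrative, not the paper's codes).

```python
import numpy as np

rng = np.random.default_rng(1)

k, n = 100, 200
# Dense random generator: updating one message bit touches ~n/2 encoded bits.
G_dense = rng.integers(0, 2, size=(k, n))
# Sparse generator with row weight ceil(log2 n): logarithmic update cost.
w = int(np.ceil(np.log2(n)))
G_sparse = np.zeros((k, n), dtype=int)
for i in range(k):
    G_sparse[i, rng.choice(n, size=w, replace=False)] = 1

def update_cost(G, i):
    """Number of encoded bits that change when message bit i flips:
    exactly the weight of row i of the generator matrix G."""
    return int(G[i].sum())

print(update_cost(G_dense, 0), update_cost(G_sparse, 0))
```

The nontrivial content of the paper is that logarithmic row weight is not only sufficient but necessary for capacity-approaching codes, and that such sparse generators can still achieve capacity.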

    Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning

    The paper introduces the application of information geometry to describe the ground states of Ising models by utilizing parity-check matrices of cyclic and quasi-cyclic codes on toric and spherical topologies. The approach establishes a connection between machine learning and error-correcting coding, and has implications for the development of new embedding methods based on trapping sets. Statistical physics and number geometry are applied to optimize error-correcting codes, leading to these embedding and sparse-factorization methods. The paper establishes a direct connection between DNN architecture and error-correcting coding by demonstrating how state-of-the-art architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate). QC codes correspond to certain types of chemical elements, with the carbon element being represented by the mixed-automorphism Shu-Lin-Fossorier QC-LDPC code. The connections between belief propagation and the permanent, the Bethe permanent, the Nishimori temperature, and the Bethe-Hessian matrix are elaborated upon in detail. The Quantum Approximate Optimization Algorithm (QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous to the back-propagation loss-function landscape in training DNNs. This similarity creates a comparable problem with trapping-set pseudo-codewords, resembling the belief propagation method. Additionally, the layer depth in QAOA correlates with the number of belief propagation decoding iterations in the Wiberg decoding tree. Overall, this work has the potential to advance multiple fields, from information theory, DNN architecture design (sparse and structured prior graph topology), and efficient hardware design for quantum and classical DPU/TPU (graph, quantized, and shift-register architectures) to materials science and beyond. Comment: 71 pages, 42 figures, 1 table, 1 appendix.
    arXiv admin note: text overlap with arXiv:2109.08184 by other authors
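The code-to-Ising-model correspondence underlying this line of work can be checked on a toy example: mapping bits to spins via s_i = (-1)^{x_i} turns each parity check into a multi-spin coupling, and the codewords become exactly the ground states of the resulting Hamiltonian. The parity-check matrix below is chosen purely for illustration.

```python
import itertools
import numpy as np

# Small banded parity-check matrix (illustrative, n = 6 bits, 4 checks).
Hpc = np.array([[1, 1, 1, 0, 0, 0],
                [0, 1, 1, 1, 0, 0],
                [0, 0, 1, 1, 1, 0],
                [0, 0, 0, 1, 1, 1]])

def ising_energy(spins, Hpc):
    """Multi-spin Ising energy -sum_c prod_{i in check c} s_i: each satisfied
    parity check contributes -1, each violated check contributes +1."""
    return -sum(np.prod(spins[row.astype(bool)]) for row in Hpc)

energies = {}
for bits in itertools.product([0, 1], repeat=Hpc.shape[1]):
    spins = 1 - 2 * np.array(bits)        # x = 0 -> s = +1, x = 1 -> s = -1
    energies[bits] = ising_energy(spins, Hpc)

e_min = min(energies.values())
ground = {b for b, e in energies.items() if e == e_min}
codewords = {b for b in energies if not ((Hpc @ np.array(b)) % 2).any()}
print(ground == codewords)
```

The minimum energy equals minus the number of checks, attained exactly when every check is satisfied, which is the statement that ground states and codewords coincide.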