
    A Rate-Compatible Sphere-Packing Analysis of Feedback Coding with Limited Retransmissions

    Recent work by Polyanskiy et al. and Chen et al. has excited new interest in using feedback to approach capacity with low latency. Polyanskiy showed that feedback identifying the first symbol at which decoding is successful allows capacity to be approached with surprisingly low latency. This paper uses Chen's rate-compatible sphere-packing (RCSP) analysis to study what happens when symbols must be transmitted in packets, as with a traditional hybrid ARQ system, and limited to relatively few (six or fewer) incremental transmissions. Numerical optimizations find the series of progressively growing cumulative block lengths that enable RCSP to approach capacity with the minimum possible latency. RCSP analysis shows that five incremental transmissions are sufficient to achieve 92% of capacity with an average block length of fewer than 101 symbols on the AWGN channel with SNR of 2.0 dB. The RCSP analysis provides a decoding error trajectory that specifies the decoding error rate for each cumulative block length. Though RCSP is an idealization, an example tail-biting convolutional code matches the RCSP decoding error trajectory and achieves 91% of capacity with an average block length of 102 symbols on the AWGN channel with SNR of 2.0 dB. We also show how RCSP analysis can be used in cases where packets have deadlines associated with them (leading to an outage probability). Comment: To be published at the 2012 IEEE International Symposium on Information Theory, Cambridge, MA, USA. Updated to incorporate reviewers' comments and add new figure.
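
    As a rough illustration of the bookkeeping behind such incremental-redundancy schemes (not the paper's RCSP optimization itself), the sketch below computes the average block length and throughput implied by an assumed decoding error trajectory; the cumulative block lengths, error probabilities, and message size are made-up toy values.

```python
# Hypothetical illustration (not the paper's RCSP analysis): given a decoding
# error trajectory p[i] = P(decoding still fails after the i-th cumulative
# block length N[i]), compute the expected block length and effective
# throughput of an m-transmission incremental-redundancy scheme.

def average_blocklength(N, p):
    """N: increasing cumulative block lengths; p[i]: probability that decoding
    fails after N[i] symbols have been received."""
    avg = N[0]
    for i in range(len(N) - 1):
        # the increment of N[i+1]-N[i] symbols is sent only if the
        # i-th decoding attempt failed
        avg += p[i] * (N[i + 1] - N[i])
    return avg

# toy numbers, chosen only to show the bookkeeping
N = [32, 48, 64, 80, 101]          # cumulative block lengths (symbols)
p = [0.3, 0.1, 0.03, 0.01, 1e-3]   # assumed decoding error trajectory
k = 64                             # information bits per packet (assumed)
avg_N = average_blocklength(N, p)
print(f"average block length ~ {avg_N:.1f} symbols, "
      f"throughput ~ {k / avg_N:.3f} bits/symbol")
```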

    Reduced complexity receivers for trellis coded modulations via punctured trellis codes

    We introduce a new concept, called matched punctured trellis encoding, that reduces the complexity of Maximum Likelihood Sequence Estimation (MLSE) receivers for combined trellis encoding and modulations with memory. Matched punctured trellis encoding is applied to tamed frequency modulation (TFM), which is a bandwidth-efficient correlative-FM scheme. TFM finds applications in satellite, microwave radio, and mobile communications. Our approach is based on puncturing a rate-1/2 matched convolutional code to obtain a rate-2/3 mismatched code. A matched code is one that produces trellis-coded modulations of minimum complexity. Puncturing these codes to obtain mismatched codes of higher rates increases the complexity of the trellis-coded modulations, and in return greater coding gains can be achieved. However, the main idea here is that, using suboptimum MLSE receivers with just the complexity of the matched codes, good coding gains can still be achieved. Furthermore, we conclude that the new rate-2/3 coded modulations obtained with our approach achieve greater coding gains (at the same complexity) than previously published work. The new codes are obtained by exhaustive computer search, and coding gains of up to 5.73 dB for 32 decoder states are achieved. These new codes are well suited to TFM modulation on an AWGN channel.
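
    For readers unfamiliar with puncturing, the hedged sketch below shows the basic mechanism of deriving a rate-2/3 code from a rate-1/2 convolutional encoder; the (7,5) generators and the puncturing pattern are arbitrary textbook choices, not the matched or mismatched codes found by the search described above.

```python
# Illustrative sketch (not the paper's code search): puncture the output of a
# rate-1/2 convolutional encoder to obtain a rate-2/3 code.

G = [0b111, 0b101]            # rate-1/2 feedforward generators (7, 5 in octal)
PUNCTURE = [[1, 1], [1, 0]]   # per 2 input bits, keep 3 of 4 coded bits -> rate 2/3

def conv_encode(bits, gens, K=3):
    """Feedforward convolutional encoder with constraint length K."""
    state = 0
    out = [[] for _ in gens]
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        for j, g in enumerate(gens):
            out[j].append(bin(state & g).count("1") % 2)
    return out

def puncture(streams, pattern):
    """Interleave encoder output streams, dropping bits where pattern is 0."""
    period = len(pattern[0])
    kept = []
    for t in range(len(streams[0])):
        for j, s in enumerate(streams):
            if pattern[j][t % period]:
                kept.append(s[t])
    return kept

bits = [1, 0, 1, 1, 0, 0, 1, 0]
coded = puncture(conv_encode(bits, G), PUNCTURE)
print(f"{len(bits)} info bits -> {len(coded)} coded bits "
      f"(rate {len(bits)/len(coded):.2f})")
```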

    Searching for high-rate convolutional codes via binary syndrome trellises

    Rate R=(c-1)/c convolutional codes of constraint length nu can be represented by conventional syndrome trellises with a state complexity of s=nu, or by binary syndrome trellises with a state complexity of s=nu or s=nu+1, which corresponds to at most 2^s states at each trellis level. It is shown that if the parity-check polynomials fulfill certain conditions, there exist binary syndrome trellises with optimum state complexity s=nu. The BEAST (Bidirectional Efficient Algorithm for Searching code Trees) is modified to handle parity-check matrices and used to generate code tables of optimum-free-distance rate R=(c-1)/c, c=3,4,5, convolutional codes for conventional syndrome trellises and for binary syndrome trellises with optimum state complexity. These results show that the loss in distance properties due to the optimum state complexity restriction for binary trellises is typically negligible.
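
    The syndrome trellis rests on the fact that every codeword has an all-zero syndrome with respect to the parity-check matrix. The sketch below illustrates this for a rate-2/3 code with arbitrary, hypothetical parity-check polynomials (not taken from the BEAST-generated tables above).

```python
# Minimal sketch with arbitrary parity-check polynomials: verify that a
# rate-2/3 convolutional codeword has zero syndrome, the property that
# syndrome trellises (binary or conventional) exploit.

def gf2_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def gf2_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0) for i in range(n)]

# hypothetical parity-check row H(D) = [h1(D), h2(D), h3(D)]
h1, h2, h3 = [1, 1, 1], [1, 0, 1], [1, 1, 0]   # 1+D+D^2, 1+D^2, 1+D

# G(D) = [[h3, 0, h1], [0, h3, h2]] satisfies G(D) H(D)^T = 0 over GF(2),
# so encoding two information sequences with it yields a valid codeword.
u1, u2 = [1, 0, 1, 1], [0, 1, 1, 0]
v1, v2 = gf2_mul(u1, h3), gf2_mul(u2, h3)
v3 = gf2_add(gf2_mul(u1, h1), gf2_mul(u2, h2))

syndrome = gf2_add(gf2_add(gf2_mul(h1, v1), gf2_mul(h2, v2)), gf2_mul(h3, v3))
print("syndrome:", syndrome)   # all zeros -> valid codeword
```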

    The Error-Pattern-Correcting Turbo Equalizer

    The error-pattern correcting code (EPCC) is incorporated in the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error rate (BER) floor than the conventional non-precoded TE, especially for high-rate applications. A maximum-likelihood upper bound is developed on the BER floor of the TE-EPCC for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain for various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC relative to an existing precoded TE is compared to demonstrate the present TE's superiority for short interleaver lengths and high coding rates. Comment: This work has been submitted to the special issue of the IEEE Transactions on Information Theory titled "Facets of Coding Theory: from Algorithms to Networks". This work was supported in part by NSF Theoretical Foundation Grant 0728676.
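
    The notion of the Euclidean weight of an error event on an ISI channel can be illustrated with a short sketch: the squared Euclidean distance contributed by an input error pattern e is ||e * h||^2, with * denoting convolution. The channel taps and error patterns below are generic examples, not the design cases analyzed in the paper.

```python
# Illustrative sketch (assumptions, not the paper's bound): the squared
# Euclidean weight of a BPSK error event e at the output of an ISI channel h
# is ||e * h||^2.  Dominant (low-weight) error events of the Viterbi detector
# are the ones an EPCC is designed to target.
import numpy as np

def euclidean_weight(error_event, h):
    """error_event: BPSK error symbols in {+2, 0, -2}; h: channel taps."""
    return float(np.sum(np.convolve(error_event, h) ** 2))

h = np.array([1.0, 0.5])                 # a generic two-tap ISI channel (1 + 0.5 D)
for e in ([2], [2, -2], [2, -2, 2]):     # common short error patterns
    print(e, "->", euclidean_weight(np.array(e, dtype=float), h))
```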

    CRC-Aided High-Rate Convolutional Codes With Short Blocklengths for List Decoding

    Recently, rate-1/n zero-terminated (ZT) and tail-biting (TB) convolutional codes (CCs) with cyclic redundancy check (CRC)-aided list decoding have been shown to closely approach the random-coding union (RCU) bound for short blocklengths. This paper designs CRC polynomials for rate-(n-1)/n ZT and TB CCs with short blocklengths. It considers both standard rate-(n-1)/n CC polynomials and rate-(n-1)/n designs resulting from puncturing a rate-1/2 code. The CRC polynomials are chosen to maximize the minimum distance d_min and minimize the number of nearest neighbors A_(d_min). For the standard rate-(n-1)/n codes, use of the dual trellis proposed by Yamada et al. lowers the complexity of CRC-aided serial list Viterbi decoding (SLVD). CRC-aided SLVD of the TBCCs closely approaches the RCU bound at a blocklength of 128. This paper compares the frame error rate (FER) performance (gap to the RCU bound) and complexity of the CRC-aided standard and punctured ZTCCs and TBCCs. It also explores the complexity-performance trade-off for three TBCC decoders: a single-trellis approach, a multi-trellis approach, and a modified single-trellis approach with pre-processing using the wrap-around Viterbi algorithm. Comment: arXiv admin note: substantial text overlap with arXiv:2111.0792
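
    The decision rule behind CRC-aided list decoding is simple to state: scan the decoder's list in order of decreasing likelihood and output the first candidate that passes the CRC check. The sketch below uses a toy CRC-3 polynomial, not one of the CRC polynomials designed in the paper.

```python
# Minimal sketch of the CRC-aided list-decoding rule: walk the list produced
# by a (serial) list Viterbi decoder, best metric first, and return the first
# candidate whose CRC check passes.  POLY (x^3 + x + 1) is a toy choice.

def crc_remainder(bits, poly_bits):
    """Remainder of bits divided by poly over GF(2) (long division)."""
    reg = list(bits)
    deg = len(poly_bits) - 1
    for i in range(len(reg) - deg):
        if reg[i]:
            for j, p in enumerate(poly_bits):
                reg[i + j] ^= p
    return reg[-deg:]

def crc_aided_list_decode(candidates, poly_bits):
    for cand in candidates:                      # assumed ordered by likelihood
        if not any(crc_remainder(cand, poly_bits)):
            return cand                          # first CRC-passing candidate
    return None                                  # erasure: nothing passes

POLY = [1, 0, 1, 1]                              # x^3 + x + 1 (toy CRC-3)
msg = [1, 0, 1, 1, 0]
codeword = msg + crc_remainder(msg + [0, 0, 0], POLY)   # append CRC bits
corrupted = codeword[:]
corrupted[2] ^= 1                                # most-likely candidate is wrong
print(crc_aided_list_decode([corrupted, codeword], POLY))  # returns the valid codeword
```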

    Design Of Fountain Codes With Error Control

    This thesis is focused on providing unequal error protection (UEP) to two disjoint sources communicating with a common destination via a common relay using distributed LT codes over a binary erasure channel (BEC), and on designing fountain codes with an error-control property by integrating LT codes with turbo codes over a binary-input additive white Gaussian noise (BI-AWGN) channel. A simple yet efficient technique for decomposing the robust soliton distribution (RSD) into two entirely different degree distributions is developed and presented in this thesis. These two distributions are used to encode data symbols at the sources, and the encoded symbols from the sources are selectively XORed at the relay according to a suitable relay operation before the combined codeword is transmitted to the destination. By doing so, it is shown that UEP can be provided to these sources. The performance of LT codes over the AWGN channel is studied and presented in this thesis, indicating that these codes have weak error-correction ability over that channel. Errors introduced into individual symbols during transmission over noisy channels must therefore be corrected by error-correcting codes. Since LT codes alone are weak at correcting these errors, they are integrated with turbo codes, which are good error-correcting codes. The source data (symbols) are first turbo encoded, then LT encoded, and transmitted over the AWGN channel. When the corrupted encoded symbols are received at the receiver, LT decoding is performed, followed by turbo decoding. The overall performance of the integrated system is studied and presented in this thesis, which suggests that the errors remaining after LT decoding can be corrected to some extent by the turbo decoder.
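
    A minimal sketch of LT encoding, the building block the thesis works with, is given below; a uniform toy degree distribution stands in for the robust soliton distribution, and the source symbols are arbitrary.

```python
# Minimal LT-encoding sketch: each output symbol XORs d randomly chosen source
# symbols, with d drawn from a degree distribution.  The toy distribution used
# here is NOT the robust soliton distribution that the thesis decomposes.
import random

def lt_encode(source_symbols, num_output, degree_dist):
    """degree_dist: list of (degree, probability) pairs summing to 1."""
    degrees, probs = zip(*degree_dist)
    encoded = []
    for _ in range(num_output):
        d = random.choices(degrees, weights=probs, k=1)[0]
        neighbors = random.sample(range(len(source_symbols)), d)
        symbol = 0
        for i in neighbors:
            symbol ^= source_symbols[i]      # XOR of the chosen source symbols
        encoded.append((neighbors, symbol))  # neighbor list needed by the peeling decoder
    return encoded

src = [0b1011, 0b0110, 0b1110, 0b0001]       # toy 4-bit source symbols
toy_dist = [(1, 0.25), (2, 0.5), (3, 0.25)]  # stand-in degree distribution
print(lt_encode(src, 6, toy_dist))
```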

    Rate-compatible LDPC Codes based on Primitive Polynomials and Golomb Rulers

    We introduce and study a family of rate-compatible Low-Density Parity-Check (LDPC) codes characterized by very simple encoders. The design of these codes starts from simplex codes, which are defined by parity-check matrices having a straightforward form stemming from the coefficients of a primitive polynomial. For this reason, we call the new codes Primitive Rate-Compatible LDPC (PRC-LDPC) codes. By puncturing these codes, we obtain bit-level granularity of their code rates. We show that, in order to obtain good LDPC codes, the underlying polynomials, besides being primitive, must meet conditions more stringent than those required for classical punctured simplex codes. We leverage non-modular Golomb rulers to take the new requirements into account. We characterize the minimum distance properties of PRC-LDPC codes, and study and discuss their encoding and decoding complexity. Finally, we assess their error rate performance under iterative decoding.
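
    The Golomb-ruler property invoked above is that all pairwise differences between marks are distinct; a short check is sketched below with a well-known 5-mark ruler that is not necessarily one used in the PRC-LDPC construction.

```python
# Quick sketch of the Golomb-ruler property: a set of marks is a Golomb ruler
# iff all pairwise differences are distinct.
from itertools import combinations

def is_golomb_ruler(marks):
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb_ruler([0, 1, 4, 9, 11]))   # True: a known optimal 5-mark ruler
print(is_golomb_ruler([0, 1, 2, 4]))       # False: differences 1 and 2 repeat
```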

    Performance of turbo multi-user detectors in space-time coded DS-CDMA systems

    In this thesis we address the problem of improving the uplink capacity and performance of a DS-CDMA system by combining multi-user detection (MUD) and turbo decoding. The two are combined following the turbo principle. Depending on the concatenation scheme used, we divide the resulting receivers into Partitioned Approach (PA) and Iterative Approach (IA) receivers. To enable the iterative exchange of information, these receivers employ a Parallel Interference Cancellation (PIC) detector as the first receiver stage.
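
    A single PIC stage of the generic kind described above can be sketched as follows: each user's matched-filter statistic has the estimated contribution of all other users subtracted from it before re-detection. The cross-correlation matrix, amplitudes, and noise level below are hypothetical, not taken from the thesis.

```python
# Hypothetical sketch of one parallel interference cancellation (PIC) stage
# for a synchronous CDMA model y = R A b + n.
import numpy as np

def pic_stage(y_mf, R, amplitudes, symbol_estimates):
    """y_mf: matched-filter outputs (K,); R: KxK normalized cross-correlation
    matrix; symbol_estimates: current (soft or hard) symbol decisions."""
    A = np.diag(amplitudes)
    interference = (R - np.eye(len(y_mf))) @ A @ symbol_estimates
    return y_mf - interference            # cleaned statistics for re-detection

K = 3
R = np.array([[1.0, 0.3, 0.2], [0.3, 1.0, 0.25], [0.2, 0.25, 1.0]])
amps = np.ones(K)
b = np.array([1.0, -1.0, 1.0])            # transmitted BPSK symbols
y = R @ (amps * b) + 0.1 * np.random.randn(K)
est = np.sign(y)                          # tentative hard decisions
print(np.sign(pic_stage(y, R, amps, est)))
```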

    TinyTurbo: Efficient Turbo Decoders on Edge

    In this paper, we introduce a neural-augmented decoder for Turbo codes called TINYTURBO. TINYTURBO has complexity comparable to the classical max-log-MAP algorithm but has much better reliability than the max-log-MAP baseline and performs close to the MAP algorithm. We show that TINYTURBO exhibits strong robustness on a variety of practical channels of interest, such as the EPA and EVA channels included in the LTE standards. We also show that TINYTURBO generalizes strongly across different rates, blocklengths, and trellises. We verify the reliability and efficiency of TINYTURBO via over-the-air experiments. Comment: 10 pages, 6 figures. Published at the 2022 IEEE International Symposium on Information Theory (ISIT).
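
    The max-log-MAP baseline mentioned above replaces the exact Jacobian logarithm (max*) used in MAP/BCJR decoding with a plain maximum; the small sketch below contrasts the two on a few arbitrary operand pairs.

```python
# Small sketch of the approximation underlying the max-log-MAP baseline:
# exact max* (Jacobian logarithm) versus the max-log shortcut.
import math

def max_star(a, b):
    """Exact Jacobian logarithm: log(exp(a) + exp(b))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """max-log-MAP approximation: drop the correction term."""
    return max(a, b)

for a, b in [(1.0, 0.5), (2.0, -3.0), (0.1, 0.1)]:
    print(f"max*({a},{b}) = {max_star(a, b):.3f}, max-log = {max_log(a, b):.3f}")
```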