    A Systematic Approach to Incremental Redundancy over Erasure Channels

    As sensing and instrumentation play an increasingly important role in systems controlled over wired and wireless networks, the need to better understand delay-sensitive communication becomes a prime issue. Along these lines, this article studies the operation of data links that employ incremental redundancy as a practical means to protect information from the effects of unreliable channels. Specifically, this work extends a powerful methodology termed sequential differential optimization to choose near-optimal block sizes for hybrid ARQ over erasure channels. In doing so, an interesting connection between random coding and well-known constants in number theory is established. Furthermore, results show that the impact of the coding strategy adopted and the propensity of the channel to erase symbols naturally decouple when analyzing throughput. Overall, block size selection is motivated by normal approximations on the probability of decoding success at every stage of the incremental transmission process. This novel perspective, which rigorously bridges hybrid ARQ and coding, offers a pragmatic means to select code rates and blocklengths for incremental redundancy. Comment: 7 pages, 2 figures; a shorter version of this article will appear in the proceedings of ISIT 201
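
    The block-sizing idea described above lends itself to a small numerical illustration. The sketch below is a simplification of mine, not the paper's algorithm: it assumes an idealized random code that decodes as soon as at least k unerased symbols arrive, and it uses a normal approximation to the binomial count of surviving symbols to pick the cumulative blocklength for a target per-stage success probability. Function names, parameter values, and the Monte Carlo check are all illustrative.

```python
# Minimal sketch: normal-approximation block sizing for hybrid ARQ with
# incremental redundancy over a symbol-erasure channel.
# Assumption (mine, not the paper's): an idealized random code decodes once at
# least k unerased symbols have been received; each symbol is erased
# independently with probability eps.
import random
from statistics import NormalDist

def cumulative_blocklength(k, eps, target_success):
    """Smallest cumulative blocklength n such that, under the normal
    approximation to Binomial(n, 1 - eps), the probability of receiving
    at least k unerased symbols is at least target_success."""
    z = NormalDist().inv_cdf(target_success)
    n = k  # cannot decode with fewer than k transmitted symbols
    while True:
        mean = n * (1.0 - eps)
        std = (n * eps * (1.0 - eps)) ** 0.5
        if mean - z * std >= k:
            return n
        n += 1

def simulate_stage(k, n, eps, trials=10000):
    """Monte Carlo estimate of P(at least k of the n symbols survive)."""
    wins = sum(
        sum(random.random() > eps for _ in range(n)) >= k
        for _ in range(trials)
    )
    return wins / trials

if __name__ == "__main__":
    k, eps = 128, 0.1
    for p in (0.5, 0.9, 0.99):  # per-stage decoding-success targets
        n = cumulative_blocklength(k, eps, p)
        print(f"target {p:.2f}: send {n} symbols, "
              f"empirical success {simulate_stage(k, n, eps):.3f}")
```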

    Low-rate coding using incremental redundancy for GLDPC codes

    In this paper, we propose a low-rate coding method suited for application-layer forward error correction. Depending on channel conditions, the coding scheme we propose can switch from a fixed-rate LDPC code to various low-rate GLDPC codes. The source symbols are first encoded by using a staircase or triangular LDPC code. If additional symbols are needed, the encoder is then switched to the GLDPC mode and extra-repair symbols are produced on demand. In order to ensure small overheads, we consider irregular distributions of extra-repair symbols optimized by density evolution techniques. We also show that increasing the number of extra-repair symbols improves the successful decoding probability, which becomes very close to 1 for sufficiently many extra-repair symbols.
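
    The overhead-versus-success trade-off behind "extra-repair symbols on demand" can be illustrated with a toy experiment. The sketch below is not a staircase/triangular LDPC or GLDPC construction: each repair symbol is simply a dense random XOR of the k source symbols, and decoding is declared successful when the received combinations span GF(2)^k. All names and parameters are illustrative; the only point is that the success probability approaches 1 quickly as extra symbols are added.

```python
# Toy model: decoding succeeds once the received random combinations of the
# k source symbols have full rank over GF(2).
import random

def gf2_rank(rows):
    """Rank over GF(2) of rows given as integer bitmasks."""
    pivots = {}  # leading-bit position -> reduced row
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in pivots:
                cur ^= pivots[lead]
            else:
                pivots[lead] = cur
                break
    return len(pivots)

def decode_success(k, num_received):
    """One trial: num_received dense random combinations of k source symbols."""
    rows = [random.getrandbits(k) for _ in range(num_received)]
    return gf2_rank(rows) == k

def success_probability(k, extra, trials=2000):
    """Estimate P(success) when the decoder holds k + extra symbols."""
    return sum(decode_success(k, k + extra) for _ in range(trials)) / trials

if __name__ == "__main__":
    k = 64
    for extra in range(0, 11, 2):
        print(f"{extra:2d} extra-repair symbols: "
              f"success ~ {success_probability(k, extra):.3f}")
```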

    Fountain coding with decoder side information

    In this contribution, we consider the application of Digital Fountain (DF) codes to the problem of data transmission when side information is available at the decoder. The side information is modelled as a "virtual" channel output when the original information sequence is the input. For two cases of the system model, which model both the virtual and the actual transmission channel either as a binary erasure channel or as a binary-input additive white Gaussian noise (BIAWGN) channel, we propose methods of enhancing the design of standard non-systematic DF codes by optimizing their output degree distribution based on the side information assumption. In addition, a systematic Raptor design has been employed as a possible solution to the problem.
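
    A minimal fountain-coding sketch helps make the decoder-side-information setting concrete. The code below uses a textbook LT-style encoder with the ideal soliton degree distribution and a peeling decoder over a binary erasure channel, with the side information modelled as a subset of source symbols already known at the decoder; it is not the optimized degree distributions or the systematic Raptor design studied in the paper, and all function names and parameters are illustrative. The side-information run typically decodes far more often at the same overhead.

```python
# LT-style fountain sketch with decoder side information over an erasure channel.
import random

def ideal_soliton(k):
    """Ideal soliton distribution: P(1)=1/k, P(d)=1/(d(d-1)) for d>=2."""
    return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def sample_degree(probs):
    r, acc = random.random(), 0.0
    for d, p in enumerate(probs):
        acc += p
        if r <= acc:
            return d
    return len(probs) - 1

def encode_symbol(source_bits, probs):
    """One output symbol: XOR of a random degree-d subset of source bits."""
    d = sample_degree(probs)
    neighbours = set(random.sample(range(len(source_bits)), d))
    value = 0
    for i in neighbours:
        value ^= source_bits[i]
    return neighbours, value

def peel_decode(k, received, side_info):
    """Peeling decoder; side_info maps already-known source indices to values."""
    resolved = dict(side_info)
    work = [(set(nb), val) for nb, val in received]
    progress = True
    while progress and len(resolved) < k:
        progress = False
        reduced = []
        for nb, val in work:
            for i in [i for i in nb if i in resolved]:
                nb.discard(i)
                val ^= resolved[i]
            if len(nb) == 1:  # degree-one symbol releases a source bit
                (i,) = nb
                if i not in resolved:
                    resolved[i] = val
                    progress = True
            elif nb:
                reduced.append((nb, val))
        work = reduced
    return len(resolved) == k

if __name__ == "__main__":
    k, overhead, trials = 100, 1.15, 200
    probs = ideal_soliton(k)
    wins = {"no side info": 0, "half the source known": 0}
    for _ in range(trials):
        source = [random.getrandbits(1) for _ in range(k)]
        coded = [encode_symbol(source, probs) for _ in range(int(overhead * k))]
        known = {i: source[i] for i in random.sample(range(k), k // 2)}
        wins["no side info"] += peel_decode(k, coded, {})
        wins["half the source known"] += peel_decode(k, coded, known)
    for name, w in wins.items():
        print(f"{name}: decoded {w}/{trials} runs")
```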

    Myths and Realities of Rateless Coding

    Fixed-rate and rateless channel codes are generally treated separately in the related research literature, and so a novice in the field inevitably gets the impression that these channel codes are unrelated. By contrast, in this treatise, we endeavor to further develop the link between traditional fixed-rate codes and the recently developed rateless codes by delving into their underlying attributes. This joint treatment is beneficial for two principal reasons. First, it facilitates the task of researchers and practitioners who might be familiar with fixed-rate codes and would like to jump-start their understanding of the recently developed rateless coding concepts. Second, it provides grounds for extending the use of the well-understood code design tools, originally contrived for fixed-rate codes, to the realm of rateless codes. Indeed, these versatile tools proved to be vital in the design of diverse fixed-rate-coded communications systems, and thus our hope is that they will further elucidate the associated performance ramifications of rateless coded schemes.

    Isn't Hybrid ARQ Sufficient?

    In practical systems, reliable communication is often accomplished by coding at different network layers. We question the necessity of this approach and examine when it can be beneficial. Through conceptually simple probabilistic models (based on coin tossing), we argue that multicast scenarios and protocol restrictions may make concatenated multi-layer coding preferable to physical layer coding alone, which is mostly not the case in point-to-point communications. Comment: Paper presented at Allerton Conference 201
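
    The multicast argument can be reproduced with a toy Monte Carlo experiment in the spirit of such coin-tossing models. The sketch below is my own construction, not the model from the paper: K packets are multicast to N receivers over independent erasure channels with loss probability e; one strategy repeats each packet until every receiver has it, while the other sends idealized rateless-coded packets so that any K received packets suffice at each receiver. For N = 1 the two behave alike, but the coded strategy needs markedly fewer transmissions as N grows.

```python
# Toy coin-tossing comparison: per-packet retransmission vs. upper-layer coding
# when multicasting K packets to N receivers over independent erasure channels.
import random

def arq_transmissions(K, N, e):
    """Total transmissions when each packet is repeated until all N receivers have it."""
    total = 0
    for _ in range(K):
        missing = N
        # Each transmission reaches every still-missing receiver with prob 1 - e.
        while missing > 0:
            total += 1
            missing = sum(random.random() < e for _ in range(missing))
    return total

def coded_transmissions(K, N, e):
    """Total transmissions of ideal coded packets until every receiver holds K of them."""
    have = [0] * N
    total = 0
    while min(have) < K:
        total += 1
        for r in range(N):
            if random.random() >= e:
                have[r] += 1
    return total

def average(fn, trials=500, **kw):
    return sum(fn(**kw) for _ in range(trials)) / trials

if __name__ == "__main__":
    K, e = 50, 0.1
    for N in (1, 4, 16, 64):
        arq = average(arq_transmissions, K=K, N=N, e=e)
        cod = average(coded_transmissions, K=K, N=N, e=e)
        print(f"N={N:3d} receivers: ARQ ~ {arq:6.1f}, coded ~ {cod:6.1f} transmissions")
```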

    Multi-level Turbo Decoding Assisted Soft Combining Aided Hybrid ARQ

    Hybrid Automatic Repeat reQuest (ARQ) plays an essential role in error control. Combining the incorrectly received packet replicas in hybrid ARQ has been shown to reduce the resultant error probability while improving the achievable throughput. Hence, in this contribution, multi-level turbo codes are amalgamated with both hybrid ARQ and efficient soft combining techniques that take into account the Log-Likelihood Ratios (LLRs) of retransmitted packet replicas. In this paper, we present a soft combining aided hybrid ARQ scheme based on multi-level turbo codes, which avoids the capacity loss of the twin-level turbo codes that are typically employed in hybrid ARQ schemes. More specifically, the proposed receiver dynamically appends an additional parallel concatenated Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm based decoder in order to fully exploit each retransmission, thereby forming a multi-level turbo decoder. Therefore, all the extrinsic information acquired during the previous BCJR operations is used as a priori information by the additional BCJR decoders, whilst their soft output iteratively enhances the a posteriori information generated by the previous decoding stages. We also present link-level Packet Loss Ratio (PLR) and throughput results, which demonstrate that our scheme outperforms some of the previously proposed benchmarks.
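
    The benefit of soft combining on its own is easy to demonstrate in isolation. The sketch below is deliberately much simpler than the multi-level turbo decoder proposed above: there is no channel code at all, only Chase-style accumulation of per-bit LLRs across identical BPSK retransmissions over an AWGN channel before a hard decision. All names and parameters are illustrative.

```python
# Minimal Chase-style soft combining: sum per-bit LLRs over HARQ replicas.
import random

def transmit(bits, sigma):
    """BPSK-modulate (0 -> +1, 1 -> -1) and add Gaussian noise."""
    return [(1.0 - 2.0 * b) + random.gauss(0.0, sigma) for b in bits]

def llrs(received, sigma):
    """Per-bit LLRs L = 2*y/sigma^2 for BPSK over AWGN."""
    return [2.0 * y / (sigma * sigma) for y in received]

def bit_error_rate(bits, combined_llrs):
    errors = sum((l < 0.0) != bool(b) for b, l in zip(bits, combined_llrs))
    return errors / len(bits)

if __name__ == "__main__":
    random.seed(1)
    n, sigma = 20000, 1.0          # noise level where single shots are unreliable
    bits = [random.getrandbits(1) for _ in range(n)]
    combined = [0.0] * n
    for attempt in range(1, 5):    # original transmission plus three replicas
        rx = transmit(bits, sigma)
        combined = [c + l for c, l in zip(combined, llrs(rx, sigma))]
        print(f"after {attempt} replica(s): BER = {bit_error_rate(bits, combined):.4f}")
```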

    Design of rate-compatible structured LDPC codes for hybrid ARQ applications

    In this paper, families of rate-compatible protograph-based LDPC codes that are suitable for incremental-redundancy hybrid ARQ applications are constructed. A systematic technique to construct low-rate base codes from a higher-rate code is presented. The base codes are designed to be robust against erasures while maintaining good performance on error channels. A progressive node-puncturing algorithm is devised to construct a family of higher-rate codes from the base code. The performance of this puncturing algorithm is compared with that of other puncturing schemes. Using the techniques in this paper, one can construct a rate-compatible family of codes with rates ranging from 0.1 to 0.9 that are within 1 dB of the channel capacity and have good error floors.
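
    Progressive puncturing can be sketched with a greatly simplified heuristic. The code below is not the paper's protograph-based algorithm; it only captures one common ingredient of rate-compatible puncturing designs: puncture variable nodes one at a time, accepting a node only if some check node can recover it in a single iteration (all of that check's other neighbours remain transmitted), and reserving each check for at most one punctured node. The toy Tanner graph, function names, and parameters are illustrative.

```python
# Greedy "1-step recoverable" puncturing on a toy Tanner graph, producing a
# nested family of punctured codes of increasing rate.
import random

def random_ldpc_graph(n_vars, n_checks, check_degree, seed=0):
    """Toy Tanner graph: each check node connects to check_degree variable nodes."""
    rng = random.Random(seed)
    return [set(rng.sample(range(n_vars), check_degree)) for _ in range(n_checks)]

def progressive_puncturing(checks, n_vars, max_punctured):
    """Greedily pick variable nodes to puncture; return them in puncturing order."""
    punctured, reserved_checks, order = set(), set(), []
    while len(order) < max_punctured:
        placed = False
        for v in range(n_vars):
            if v in punctured:
                continue
            for c, nbrs in enumerate(checks):
                if c in reserved_checks or v not in nbrs:
                    continue
                if all(u not in punctured for u in nbrs if u != v):
                    punctured.add(v)
                    reserved_checks.add(c)
                    order.append(v)
                    placed = True
                    break
            if placed:
                break
        if not placed:          # no more 1-step-recoverable candidates
            break
    return order

if __name__ == "__main__":
    n_vars, n_checks = 60, 30            # mother code rate ~ (60 - 30) / 60 = 0.5
    checks = random_ldpc_graph(n_vars, n_checks, check_degree=6)
    order = progressive_puncturing(checks, n_vars, max_punctured=25)
    k = n_vars - n_checks                # assumes full-rank parity checks
    for p in range(0, len(order) + 1, 5):
        print(f"{p:2d} punctured nodes -> rate {k / (n_vars - p):.2f}")
```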