26 research outputs found

    Deriving Good LDPC Convolutional Codes from LDPC Block Codes

    Low-density parity-check (LDPC) convolutional codes are capable of achieving excellent performance with low encoding and decoding complexity. In this paper we discuss several graph-cover-based methods for deriving families of time-invariant and time-varying LDPC convolutional codes from LDPC block codes and show how earlier proposed LDPC convolutional code constructions can be presented within this framework. Some of the constructed convolutional codes significantly outperform the underlying LDPC block codes. We investigate some possible reasons for this "convolutional gain," and we also discuss the (mostly moderate) decoder cost increase that is incurred by going from LDPC block to LDPC convolutional codes. Comment: Submitted to IEEE Transactions on Information Theory, April 2010; revised August 2010 and November 2010 (essentially final version). Besides many small changes, the first and second revised versions contain corrected entries in Tables I and II.
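
    The central idea behind deriving convolutional from block codes here is to split the block parity-check matrix along a diagonal cut and repeat the two pieces along a diagonal band. The Python sketch below illustrates this unwrapping under stated assumptions; the function name, the particular cut, and the simple termination are illustrative choices rather than the paper's exact graph-cover construction.

        import numpy as np

        def unwrap_block_code(H, L):
            """Unwrap a block parity-check matrix H into a terminated
            convolutional (spatially coupled) parity-check matrix.

            A minimal sketch of the classical diagonal-cut idea: H is split
            into a lower part H0 and an upper part H1 with H = H0 + H1, and
            copies of (H0, H1) are placed on a diagonal band spanning L
            coupled positions.
            """
            m, n = H.shape
            H0 = np.zeros_like(H)
            H1 = np.zeros_like(H)
            for i in range(m):
                for j in range(n):
                    if j <= i * n // m:      # on/below the scaled diagonal -> H0
                        H0[i, j] = H[i, j]
                    else:                    # above the cut -> H1
                        H1[i, j] = H[i, j]
            # (L+1)*m x L*n terminated coupled matrix: H0 on the diagonal,
            # H1 one block row below it.
            Hcc = np.zeros(((L + 1) * m, L * n), dtype=H.dtype)
            for t in range(L):
                Hcc[t * m:(t + 1) * m, t * n:(t + 1) * n] = H0
                Hcc[(t + 1) * m:(t + 2) * m, t * n:(t + 1) * n] = H1
            return Hcc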

    On the Block Error Rate Performance of Spatially Coupled LDPC Codes for Streaming Applications

    In this paper, we study the block error rate (BLER) performance of spatially coupled low-density parity-check (SC-LDPC) codes using a sliding window decoder suited for streaming applications. Previous studies of SC-LDPC codes have focused on the bit error rate (BER) performance or the frame error rate (FER) performance over the entire length of the code. Here, we consider protograph-based constructions of SC-LDPC codes in which a window decoder continuously outputs blocks in a streaming fashion, and we examine the BLER associated with these blocks. We begin by examining the effect of protograph design on the streaming BLER by varying the block size and the coupling width in such a way that the overall constraint length of the SC-LDPC code remains constant. Next, we investigate the BLER scaling behavior with block size and coupling width. Lastly, we consider the effect of employing an outer code to protect blocks, so that small numbers of residual errors can be corrected by the outer code. Simulation results for the additive white Gaussian noise channel (AWGNC) are included and comparisons are made to LDPC block codes (LDPC-BCs).
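
    A window decoder of size W runs message passing only over the W most recent coupled positions, outputs a decision for the oldest (target) block, and then slides forward by one position; these per-block decisions are what the streaming BLER measures. The sketch below shows only that loop structure under stated assumptions: the source of received blocks, the bp_update message-passing routine, and the handling of termination are left abstract and are not taken from the paper.

        def window_decode_stream(rx_blocks, bp_update, W, I_max):
            """Schematic sliding window decoder for an SC-LDPC code.

            A sketch, not the paper's decoder: rx_blocks yields the channel
            values for successive coupled positions, and bp_update(window)
            is assumed to run one round of message passing over the nodes
            currently inside the window and return hard decisions for the
            oldest (target) position.
            """
            decoded = []
            window = []
            for block in rx_blocks:
                window.append(block)
                if len(window) < W:
                    continue                     # wait until the window is full
                for _ in range(I_max):           # iterations confined to the window
                    target_hat = bp_update(window)
                decoded.append(target_hat)       # streaming output: one target block
                window.pop(0)                    # slide the window forward by one
            return decoded                       # tail positions at termination omitted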

    Randomly Punctured Spatially Coupled LDPC Codes

    In this paper, we study random puncturing of protograph-based spatially coupled low-density parity-check (SC-LDPC) code ensembles. We show that, with respect to the iterative decoding threshold, the strength and suitability of an LDPC code ensemble for random puncturing over the binary erasure channel (BEC) is completely determined by a single constant that depends only on the rate and iterative decoding threshold of the mother code ensemble. We then use this analysis to show that randomly punctured SC-LDPC code ensembles display near-capacity thresholds for a wide range of rates. We also perform an asymptotic minimum distance analysis and show that, like the SC-LDPC mother code ensemble, the punctured SC-LDPC code ensembles are also asymptotically good. Finally, we present some simulation results that confirm the excellent decoding performance promised by the asymptotic results.

    Randomly Punctured LDPC Codes

    In this paper, we present a random puncturing analysis of low-density parity-check (LDPC) code ensembles. We derive a simple analytic expression for the iterative belief propagation (BP) decoding threshold of a randomly punctured LDPC code ensemble on the binary erasure channel (BEC) and show that, with respect to the BP threshold, the strength and suitability of an LDPC code ensemble for random puncturing is completely determined by a single constant that depends only on the rate and the BP threshold of the mother code ensemble. We then provide an efficient way to accurately predict BP thresholds of randomly punctured LDPC code ensembles on the binary-input additive white Gaussian noise channel (BI-AWGNC), given only the BP threshold of the mother code ensemble on the BEC and the design rate, and we show how the prediction can be improved with knowledge of the BI-AWGNC threshold. We also perform an asymptotic minimum distance analysis of randomly punctured code ensembles and present simulation results that confirm the robust decoding performance promised by the asymptotic results. Protograph-based LDPC block code and spatially coupled LDPC code ensembles are used throughout as examples to demonstrate the results.
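
    On the BEC, the analysis reduces to a simple erasure-combining argument: a randomly punctured bit is always unknown at the decoder, so with puncturing fraction p the effective erasure probability is p + (1 - p)ε, and decoding succeeds whenever this stays below the mother-code threshold. The sketch below works that argument through numerically; the notation (theta for the single constant) and the example ensemble values are illustrative, not taken from the paper.

        def punctured_bec_threshold(eps_mother, R_mother, R_target):
            """BP threshold of a randomly punctured ensemble on the BEC.

            Sketch of the standard erasure-combining argument (notation
            chosen here, not necessarily the paper's): solving
            p + (1 - p) * eps = eps_mother with p = 1 - R_mother / R_target
            gives eps_p = 1 - theta * R_target, where
            theta = (1 - eps_mother) / R_mother depends only on the rate
            and BP threshold of the mother ensemble.
            """
            p = 1.0 - R_mother / R_target           # puncturing fraction for the target rate
            theta = (1.0 - eps_mother) / R_mother   # the single ensemble constant
            return 1.0 - theta * R_target, theta, p

        # Hypothetical rate-1/2 mother ensemble with BEC threshold 0.48,
        # punctured to rate 3/4:
        eps_p, theta, p = punctured_bec_threshold(0.48, 0.5, 0.75)
        # eps_p is about 0.22, versus the Shannon limit 1 - 0.75 = 0.25 at rate 3/4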

    Reduced Complexity Window Decoding Schedules for Coupled LDPC Codes

    Window decoding schedules are very attractive for message passing decoding of spatially coupled LDPC codes. They take advantage of the inherent convolutional code structure and allow continuous transmission with low decoding latency and complexity. In this paper we show that the decoding complexity can be further reduced if suitable message passing schedules are applied within the decoding window. An improvement-based schedule is presented that easily adapts to different ensemble structures, window sizes, and channel parameters. Its combination with a serial (on-demand) schedule is also considered. Results from a computer-search-based schedule are shown for comparison.
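
    One way to realize such an improvement-based schedule is to update only those window positions whose reliabilities are still improving and to drop positions from the schedule once their measured improvement falls below a tolerance. The sketch below is a generic version of that idea under stated assumptions; update_position, improvement_of, and the stopping tolerance are placeholders, not the paper's exact rule.

        def improvement_based_schedule(window_positions, update_position,
                                       improvement_of, I_max, tol=1e-3):
            """Sketch of an improvement-driven schedule inside one window.

            Assumed interfaces: update_position(pos) runs one local
            message-passing update at coupled position pos, and
            improvement_of(pos) returns a scalar measure of how much that
            position's reliability improved. Positions that stop improving
            are skipped in later rounds, which is where the complexity
            saving comes from.
            """
            active = list(window_positions)
            updates = 0
            for _ in range(I_max):
                if not active:
                    break                          # everything has converged
                still_improving = []
                for pos in active:
                    update_position(pos)
                    updates += 1
                    if improvement_of(pos) > tol:  # keep only positions still improving
                        still_improving.append(pos)
                active = still_improving
            return updates                         # complexity measure: node updates spent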

    Non-Uniform Window Decoding Schedules for Spatially Coupled LDPC Codes

    Spatially coupled low-density parity-check codes can be decoded using a graph-based message passing algorithm applied across the total length of the coupled graph. However, considering practical constraints on decoding latency and complexity, a sliding window decoding approach is normally preferred. In order to reduce decoding complexity compared with standard parallel decoding schedules, serial schedules can be applied within a decoding window. However, uniform serial schedules within a window do not provide the expected reduction in complexity. Hence, we propose non-uniform schedules (parallel and serial) based on measured improvements in the estimated bit error rate (BER). We show that these non-uniform schedules result in a significant reduction in complexity without any loss in performance. Furthermore, based on observations made using density evolution, we propose a non-uniform pragmatic decoding schedule (parallel and serial) that does not require any additional calculations (e.g., BER estimates) within the decoding process.
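
    Because the pragmatic schedule must avoid any runtime calculations, it amounts to a fixed, non-uniform allocation of updates across the window positions, chosen offline (for example, guided by density evolution). The sketch below shows one such fixed allocation; the geometric fall-off toward the newest positions is an illustrative assumption, not the profile proposed in the paper.

        def pragmatic_window_schedule(window_size, update_budget, decay=0.5):
            """Sketch of a fixed (pragmatic) non-uniform window schedule.

            The allocation rule is an illustrative assumption: positions
            closer to the target (oldest) end of the window receive more
            updates, with a geometric fall-off, and the whole profile is
            fixed offline so no per-iteration BER estimate is needed
            while decoding.
            """
            weights = [decay ** pos for pos in range(window_size)]  # pos 0 = target block
            total = sum(weights)
            # Round the update budget into an integer count per position.
            return [max(1, round(update_budget * w / total)) for w in weights]

        # Example: a window of 6 positions and a budget of 24 update rounds
        # gives roughly [12, 6, 3, 2, 1, 1]: most effort near the target block.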