
Coding for Parallel Channels: Gallager Bounds for Binary Linear Codes with Applications to Repeat-Accumulate Codes and Variations

Abstract

This paper focuses on the performance analysis of binary linear block codes (or ensembles) whose transmission takes place over independent and memoryless parallel channels. New upper bounds on the maximum-likelihood (ML) decoding error probability are derived. These bounds are applied to various ensembles of turbo-like codes, with particular attention to repeat-accumulate codes and their recent variations, which possess low encoding and decoding complexity and exhibit remarkable performance under iterative decoding. The framework of the second version of the Duman and Salehi (DS2) bounds is generalized to the case of parallel channels, along with the derivation of their optimized tilting measures. The connection between the generalized DS2 and the 1961 Gallager bounds, addressed by Divsalar and by Sason and Shamai for a single channel, is explored in the case of an arbitrary number of independent parallel channels. The generalization of the DS2 bound to parallel channels makes it possible to re-derive specific bounds which were originally derived by Liu et al. as special cases of the Gallager bound. In the asymptotic case where the block length tends to infinity, the new bounds are used to obtain improved inner bounds on the attainable channel regions under ML decoding. The tightness of the new bounds for independent parallel channels is exemplified for structured ensembles of turbo-like codes. The improved bounds with their optimized tilting measures show an improvement over the union bound and other previously reported bounds for independent parallel channels, irrespective of the block length of the codes; this improvement is especially pronounced for moderate to large block lengths.

Comment: Submitted to IEEE Trans. on Information Theory, June 2006 (57 pages, 9 figures).
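For context on the baseline that the improved bounds are compared against, the following is a minimal sketch of the standard union (Bhattacharyya) bound for ML decoding over J independent memoryless parallel channels. The split weight enumerator A_{d_1,...,d_J} and the channel-assignment convention used here are illustrative assumptions and not necessarily the notation of the paper. If a code bit assigned to channel j is governed by the transition law p_j(y|x), and gamma_j denotes the Bhattacharyya parameter of that channel, then for a binary linear block code under ML decoding

\[
P_e \;\le\; \sum_{(d_1,\ldots,d_J)\neq(0,\ldots,0)} A_{d_1,\ldots,d_J}\,
\prod_{j=1}^{J} \gamma_j^{\,d_j},
\qquad
\gamma_j \triangleq \sum_{y} \sqrt{p_j(y \mid 0)\, p_j(y \mid 1)},
\]

where A_{d_1,\ldots,d_J} counts the codewords having exactly d_j ones among the positions transmitted over channel j. The pairwise error probability factorizes over the parallel channels because they are independent and memoryless. The DS2- and Gallager-type bounds developed in the paper are tighter than this union bound, particularly in the moderate-to-large block length regime noted in the abstract.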
