Two-Bit Bit Flipping Decoding of LDPC Codes
In this paper, we propose a new class of bit flipping algorithms for
low-density parity-check (LDPC) codes over the binary symmetric channel (BSC).
Compared to the regular (parallel or serial) bit flipping algorithms, the
proposed algorithms employ one additional bit at a variable node to represent
its "strength." The introduction of this additional bit increases the
guaranteed error correction capability by a factor of at least 2. An additional
bit can also be employed at a check node to capture information which is
beneficial to decoding. A framework for failure analysis of the proposed
algorithms is described. These algorithms outperform the Gallager A/B algorithm
and the min-sum algorithm at much lower complexity. Concatenation of two-bit
bit flipping algorithms shows a potential to approach the performance of belief
propagation (BP) decoding in the error floor region, also at lower complexity.
Comment: 6 pages. Submitted to IEEE International Symposium on Information Theory 201
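The paper defines its own update rules for the extra bit; as a rough, hypothetical sketch of the general idea only (a parallel bit-flipping decoder where each variable node carries a strength bit, and a strong bit is demoted to weak before it is allowed to flip; the thresholds and rules here are illustrative assumptions, not the paper's):

```python
import numpy as np

def two_bit_flip_decode(H, y, max_iters=50):
    """Hypothetical parallel bit-flipping decoder in which each variable
    node keeps one extra "strength" bit, loosely following the idea in
    the abstract; the demotion/flip thresholds below are illustrative
    assumptions, not the rules defined in the paper.
    H: (m, n) parity-check matrix over GF(2); y: received hard decisions."""
    x = y.copy()                          # current bit estimates
    strong = np.ones(len(y), dtype=bool)  # strength bit per variable node
    deg = H.sum(axis=0)                   # variable-node degrees
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                # all parity checks satisfied
        unsat = H.T @ syndrome            # unsatisfied checks per variable
        cond = unsat > deg / 2            # majority of checks unsatisfied
        flip = ~strong & cond             # weak bits flip...
        demote = strong & cond            # ...strong bits are demoted first
        x[flip] ^= 1
        strong[flip] = True               # a freshly flipped bit starts strong
        strong[demote] = False
    return x, False
```

The point of the second bit is visible in the update: a variable node must accumulate evidence (survive a demotion) before it flips, which damps the oscillations that trap one-bit flipping decoders.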
Lower Bounds on the Redundancy of Huffman Codes with Known and Unknown Probabilities
In this paper we provide a method to obtain tight lower bounds on the minimum
redundancy achievable by a Huffman code when the probability distribution
underlying an alphabet is only partially known. In particular, we address the
case where the occurrence probabilities are unknown for some of the symbols in
an alphabet. Bounds can be obtained for alphabets of a given size, for
alphabets of up to a given size, and for alphabets of arbitrary size. The
method operates on a Computer Algebra System, yielding closed-form numbers for
all results. Finally, we show the potential of the proposed method to shed some
light on the structure of the minimum redundancy achievable by the Huffman
code.
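The paper derives symbolic bounds with a computer algebra system; for a fully known distribution, the redundancy being bounded can simply be computed. A minimal sketch using the standard Huffman construction (this is background, not the paper's method):

```python
import heapq
import itertools
import math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given distribution."""
    counter = itertools.count()  # tie-breaker so tuples never compare lists
    # heap items: (subtree probability, tie-break, leaf indices in subtree)
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        for leaf in a + b:
            lengths[leaf] += 1   # each merge adds one bit to every leaf below
        heapq.heappush(heap, (p1 + p2, next(counter), a + b))
    return lengths

def redundancy(probs):
    """Redundancy = expected Huffman codeword length minus source entropy."""
    lengths = huffman_lengths(probs)
    avg_len = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg_len - entropy
```

For a dyadic distribution such as (1/2, 1/4, 1/4) the redundancy is exactly zero; the interesting cases the paper addresses are those where some probabilities are unknown and the redundancy must be bounded rather than evaluated.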
An overview of JPEG 2000
JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and some description of the capabilities provided by the standard. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach because we believe it lends itself to a compact description that is more easily understood by most readers.
On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes
The relation between the girth and the guaranteed error correction capability
of γ-left regular LDPC codes when decoded using the bit flipping (serial
and parallel) algorithms is investigated. A lower bound on the size of variable
node sets which expand by a factor of at least 3γ/4 is found based on
the Moore bound. An upper bound on the guaranteed error correction capability
is established by studying the sizes of smallest possible trapping sets. The
results are extended to generalized LDPC codes. It is shown that generalized
LDPC codes can correct a linear fraction of errors under the parallel bit
flipping algorithm when the underlying Tanner graph is a good expander. It is
also shown that the bound cannot be improved when γ is even by studying
a class of trapping sets. A lower bound on the size of variable node sets which
have the required expansion is established.
Comment: 17 pages. Submitted to IEEE Transactions on Information Theory. Parts
of this work have been accepted for presentation at the International
Symposium on Information Theory (ISIT'08) and the International Telemetering
Conference (ITC'08)
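The expansion property at the heart of these results is simple to state computationally: a set S of variable nodes expands by a factor of c when its check-node neighborhood N(S) satisfies |N(S)| >= c|S|. A minimal sketch (the example graph below is illustrative, not a construction from the paper):

```python
def expansion_ratio(checks_of, var_set):
    """Ratio |N(S)| / |S| for a set S of variable nodes in a Tanner graph.
    checks_of[v] is the list of check nodes adjacent to variable node v.
    S 'expands by a factor of at least c' when this ratio is >= c."""
    neighbors = set()
    for v in var_set:
        neighbors.update(checks_of[v])
    return len(neighbors) / len(var_set)

# A tiny 3-left-regular fragment: three variable nodes, six check nodes.
checks_of = {0: [0, 1, 2], 1: [0, 3, 4], 2: [1, 3, 5]}
```

Here the set {0, 1} has five distinct check neighbors, an expansion ratio of 2.5; the guaranteed-correction arguments in the paper hinge on every small enough variable-node set achieving such a ratio.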
Error Correction Capability of Column-Weight-Three LDPC Codes: Part II
The relation between the girth and the error correction capability of
column-weight-three LDPC codes is investigated. Specifically, it is shown that
the Gallager A algorithm can correct g/2 - 1 errors in g/2 iterations on a
Tanner graph of girth g >= 10.
Comment: 7 pages, 7 figures, submitted to IEEE Transactions on Information
Theory (July 2008)
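For reference, the Gallager A decoder analyzed here can be sketched in a few lines. This is one standard textbook form of the hard-decision message-passing rules, not code from the paper:

```python
import numpy as np

def gallager_a(H, y, max_iters=20):
    """Gallager A hard-decision message passing on the BSC.
    Check-to-variable: XOR of the other incoming variable messages.
    Variable-to-check: the channel value y[v], flipped only when ALL
    other incoming check messages agree on the opposite value.
    H: (m, n) binary parity-check matrix; y: received word."""
    m, n = H.shape
    edges = [(c, v) for c in range(m) for v in range(n) if H[c, v]]
    v2c = {e: int(y[e[1]]) for e in edges}   # initialized to channel values
    for _ in range(max_iters):
        # check-to-variable messages
        c2v = {}
        for c, v in edges:
            others = [v2c[(c, u)] for u in range(n) if H[c, u] and u != v]
            c2v[(c, v)] = sum(others) % 2
        # variable-to-check messages (the Gallager A rule)
        for c, v in edges:
            others = [c2v[(d, v)] for d in range(m) if H[d, v] and d != c]
            if others and all(o == 1 - y[v] for o in others):
                v2c[(c, v)] = 1 - y[v]
            else:
                v2c[(c, v)] = int(y[v])
        # tentative decision: flip y[v] if every incoming check disagrees
        x = np.array([1 - y[v] if all(c2v[(d, v)] == 1 - y[v]
                                      for d in range(m) if H[d, v]) else y[v]
                      for v in range(n)])
        if not (H @ x % 2).any():
            return x, True
    return x, False
```

The girth condition in the abstract controls how many iterations these messages stay independent, which is what yields the guaranteed number of correctable errors.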
Stationary probability model for microscopic parallelism in JPEG2000
Parallel processing is key to augmenting the throughput of image codecs. Despite numerous efforts to parallelize wavelet-based image coding systems, most attempts fail at the parallelization of the bitplane coding engine, which is the most computationally intensive stage of the coding pipeline. The main reason for this failure is the causality with which current coding strategies are devised, which assumes that one coefficient is coded after another. This work analyzes the mechanisms employed in bitplane coding and proposes alternatives to enhance opportunities for parallelism. We describe a stationary probability model that, without sacrificing the advantages of current approaches, removes the main obstacle to the parallelization of most coding strategies. Experimental tests evaluate the coding performance achieved by the proposed method in the framework of JPEG2000 when coding different types of images. Results indicate that the stationary probability model achieves similar coding performance, with slight increments or decrements depending on the image type and the desired level of parallelism.
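As background for the abstract above, bitplane coding scans the binary magnitude planes of the wavelet coefficients from most to least significant. A minimal sketch of that decomposition (context modeling and MQ arithmetic coding, where the paper's stationary probability model actually operates, are deliberately omitted):

```python
import numpy as np

def bitplanes(coeffs):
    """Split coefficient magnitudes into bitplanes, most significant first.
    Illustrates only the data a bitplane coding engine scans; the actual
    JPEG2000 engine additionally runs three coding passes with context
    modeling and MQ arithmetic coding over each plane."""
    mags = np.abs(coeffs)
    nplanes = int(mags.max()).bit_length()
    return [(mags >> p) & 1 for p in range(nplanes - 1, -1, -1)]
```

Within one plane, conventional strategies code the bits coefficient after coefficient because each bit's probability context depends on previously coded neighbors; a stationary model removes that sequential dependence, which is what opens the plane to parallel processing.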