Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes
We introduce the notion of the stopping redundancy hierarchy of a linear
block code as a measure of the trade-off between performance and complexity of
iterative decoding for the binary erasure channel. We derive lower and upper
bounds for the stopping redundancy hierarchy via Lovasz's Local Lemma and
Bonferroni-type inequalities, and specialize them for codes with cyclic
parity-check matrices. Based on the observed properties of parity-check
matrices with good stopping redundancy characteristics, we develop a novel
decoding technique, termed automorphism group decoding, that combines iterative
message passing and permutation decoding. We also present bounds on the
smallest number of permutations of an automorphism group decoder needed to
correct any set of erasures up to a prescribed size. Simulation results
demonstrate that for a large number of algebraic codes, the performance of the
new decoding method is close to that of maximum likelihood decoding.Comment: 40 pages, 6 figures, 10 tables, submitted to IEEE Transactions on
Information Theor
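The iterative/permutation combination described above can be illustrated with a small sketch. The peeling step is standard erasure decoding on the binary erasure channel; the permutation step below uses cyclic coordinate shifts, which are code automorphisms only for cyclic codes. The function names and the retry-over-shifts strategy are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def peel(H, erased):
    """Iterative (peeling) erasure decoding on the BEC: repeatedly find a
    check row that touches exactly one erased position and resolve that
    position by parity. Returns the set of erasures left unresolved
    (empty set = decoding success)."""
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            hit = [j for j in np.nonzero(row)[0] if j in erased]
            if len(hit) == 1:          # degree-1 check w.r.t. the erasures
                erased.remove(hit[0])  # value is forced by the parity check
                progress = True
    return erased

def automorphism_group_peel(H, erased, n_shifts):
    """Illustrative sketch of automorphism-group decoding: if plain
    peeling stalls on a stopping set, apply a code automorphism to the
    erasure pattern and retry. Here the automorphisms are assumed to be
    cyclic shifts, which is only valid for cyclic codes."""
    n = H.shape[1]
    for s in range(n_shifts):
        shifted = [(j + s) % n for j in erased]
        if not peel(H, shifted):
            return True   # all erasures resolved under this permutation
    return False
```

A stalled peeling decoder has hit a stopping set of the particular parity-check matrix; a permuted erasure pattern may avoid it, which is why extra (redundant) rows or permutations trade complexity for performance.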
Deterministic Constructions of Binary Measurement Matrices from Finite Geometry
Deterministic constructions of measurement matrices in compressed sensing
(CS) are considered in this paper. The constructions are inspired by the recent
discovery of Dimakis, Smarandache and Vontobel which says that parity-check
matrices of good low-density parity-check (LDPC) codes can be used as
provably good measurement matrices for compressed sensing under
ℓ1-minimization. The performance of the proposed binary measurement
matrices is analyzed mainly theoretically, using methods and results from
the study of (finite-geometry) LDPC codes. In particular, several
lower bounds on the spark (i.e., the smallest number of columns that are
linearly dependent, which completely characterizes the recovery performance of
ℓ0-minimization) of general binary matrices and finite-geometry matrices
are obtained, improving the previously known results in most cases.
Simulation results show that the proposed matrices perform comparably to,
sometimes even better than, the corresponding Gaussian random matrices.
Moreover, the proposed matrices are sparse, binary, and most of them have a
cyclic or quasi-cyclic structure, which makes hardware implementation
convenient.
Comment: 12 pages, 11 figures
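The spark mentioned in the abstract can be computed by brute force for small matrices, which makes the definition concrete. This is a generic sketch (exponential in the number of columns), not the paper's bounding technique; the function name and tolerance are assumptions.

```python
import itertools
import numpy as np

def spark(A, tol=1e-9):
    """Brute-force spark: the size of the smallest set of columns of A
    that is linearly dependent over the reals. Exponential cost -- only
    suitable for small illustrative matrices."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1  # all columns independent (possible only when n <= m)
```

For example, a matrix whose third column is the sum of the first two has spark 3: every pair of columns is independent, but some triple is dependent. Larger spark means sparser signals are guaranteed to be uniquely recoverable.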
Decoding Cyclic Codes up to a New Bound on the Minimum Distance
A new lower bound on the minimum distance of q-ary cyclic codes is proposed.
This bound improves upon the Bose-Chaudhuri-Hocquenghem (BCH) bound and, for
some codes, upon the Hartmann-Tzeng (HT) bound. Several Boston bounds are
special cases of our bound. For some classes of codes the bound on the minimum
distance is refined. Furthermore, a quadratic-time decoding algorithm up to
this new bound is developed. The determination of the error locations is based
on the Euclidean Algorithm and a modified Chien search. The error evaluation is
done by solving a generalization of Forney's formula.
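The classical BCH bound that the new bound improves upon is easy to state computationally: if the defining set of a length-n cyclic code contains a run of delta - 1 consecutive exponents modulo n, the minimum distance is at least delta. The sketch below evaluates exactly that classical bound; it is not the paper's new bound, and the function name is an assumption.

```python
def bch_bound(defining_set, n):
    """BCH bound for a length-n cyclic code: find the longest run of
    consecutive exponents (mod n) contained in the defining set; a run
    of length delta - 1 implies minimum distance >= delta."""
    D = set(x % n for x in defining_set)
    best = 0
    for start in range(n):
        run = 0
        while (start + run) % n in D and run < n:
            run += 1
        best = max(best, run)
    return best + 1
```

For the binary cyclic [7,4] Hamming code, the defining set {1, 2, 4} (the cyclotomic coset of 1) contains the consecutive pair 1, 2, giving the bound d >= 3, which is tight. The Hartmann-Tzeng bound and the bound of the abstract relax the "consecutive" requirement further.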
Characterization and Efficient Search of Non-Elementary Trapping Sets of LDPC Codes with Applications to Stopping Sets
In this paper, we propose a characterization for non-elementary trapping sets
(NETSs) of low-density parity-check (LDPC) codes. The characterization is based
on viewing a NETS as a hierarchy of embedded graphs starting from an ETS. The
characterization corresponds to an efficient search algorithm that under
certain conditions is exhaustive. As an application of the proposed
characterization/search, we obtain lower and upper bounds on the stopping
distance of LDPC codes.
We examine a large number of regular and irregular LDPC codes, and
demonstrate the efficiency and versatility of our technique in finding lower
and upper bounds on, and in many cases the exact value of, the stopping
distance. Finding the stopping distance, or establishing search-based lower
or upper bounds on it, for many of the examined codes is out of the reach of
any existing algorithm.
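The stopping sets underlying the stopping distance have a simple combinatorial test: a set S of variable nodes is a stopping set iff no check node is connected to S exactly once. The sketch below verifies that condition for a candidate set; it is a checker, not the paper's search algorithm, and the empty-set convention is an assumption for illustration.

```python
import numpy as np

def is_stopping_set(H, S):
    """Return True iff the variable-node set S is a stopping set of the
    Tanner graph of H: every check node adjacent to S is connected to S
    at least twice (no check sees S exactly once)."""
    S = list(S)
    if not S:
        return True  # empty set is vacuously a stopping set here
    counts = H[:, S].sum(axis=1)   # how many members of S each check sees
    return not np.any(counts == 1)
```

A peeling erasure decoder fails exactly when the erased positions contain a nonempty stopping set, which is why the stopping distance governs iterative-decoding performance on the BEC.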
Universal lossless source coding with the Burrows Wheeler transform
The Burrows Wheeler transform (1994) is a reversible sequence transformation used in a variety of practical lossless source-coding algorithms. In each, the BWT is followed by a lossless source code that attempts to exploit the natural ordering of the BWT coefficients. BWT-based compression schemes are widely touted as low-complexity algorithms giving lossless coding rates better than those of the Ziv-Lempel codes (commonly known as LZ'77 and LZ'78) and almost as good as those achieved by prediction by partial matching (PPM) algorithms. To date, the coding performance claims have been made primarily on the basis of experimental results. This work gives a theoretical evaluation of BWT-based coding. The main results of this theoretical evaluation include: (1) statistical characterizations of the BWT output on both finite strings and sequences of length n → ∞, (2) a variety of very simple new techniques for BWT-based lossless source coding, and (3) proofs of the universality and bounds on the rates of convergence of both new and existing BWT-based codes for finite-memory and stationary ergodic sources. The end result is a theoretical justification and validation of the experimentally derived conclusions: BWT-based lossless source codes achieve universal lossless coding performance that converges to the optimal coding performance more quickly than the rate of convergence observed in Ziv-Lempel style codes and, for some BWT-based codes, within a constant factor of the optimal rate of convergence for finite-memory sources.
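The reversibility of the BWT is easy to demonstrate with the textbook construction: sort all rotations of the input (terminated by a sentinel) and keep the last column. The naive quadratic inversion below is for illustration only; practical implementations use suffix arrays and the last-to-first mapping instead.

```python
def bwt(s, sentinel="\0"):
    """Burrows-Wheeler transform via sorted rotations. Assumes the
    sentinel (smaller than every input symbol) does not occur in s;
    appending it makes the transform invertible."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, sentinel="\0"):
    """Invert the BWT by repeatedly prepending the transform column and
    re-sorting: after len(t) rounds the table holds all rotations in
    sorted order, and the row ending in the sentinel is the original."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(c + row for c, row in zip(t, table))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)
```

For instance, bwt("banana") groups the repeated 'a's together in the output, and it is exactly this clustering of symbols with similar contexts that the source code following the BWT exploits.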