Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes
We introduce the notion of the stopping redundancy hierarchy of a linear
block code as a measure of the trade-off between performance and complexity of
iterative decoding for the binary erasure channel. We derive lower and upper
bounds for the stopping redundancy hierarchy via Lovász's Local Lemma and
Bonferroni-type inequalities, and specialize them for codes with cyclic
parity-check matrices. Based on the observed properties of parity-check
matrices with good stopping redundancy characteristics, we develop a novel
decoding technique, termed automorphism group decoding, that combines iterative
message passing and permutation decoding. We also present bounds on the
smallest number of permutations of an automorphism group decoder needed to
correct any set of erasures up to a prescribed size. Simulation results
demonstrate that for a large number of algebraic codes, the performance of the
new decoding method is close to that of maximum likelihood decoding.
Comment: 40 pages, 6 figures, 10 tables, submitted to IEEE Transactions on Information Theory
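To make the decoding idea concrete, the sketch below combines a peeling erasure decoder with retries under code automorphisms; the peeling rule and the cyclic-shift permutations are illustrative assumptions for a cyclic code, not the authors' implementation.

```python
import numpy as np

def peel(H, y):
    """Iterative (peeling) erasure decoding; y holds 0/1 values, None = erasure."""
    y = list(y)
    progress = True
    while progress and None in y:
        progress = False
        for row in H:
            support = np.nonzero(row)[0]
            erased = [j for j in support if y[j] is None]
            if len(erased) == 1:  # check with a single erasure: solve it
                j = erased[0]
                y[j] = sum(y[k] for k in support if k != j) % 2
                progress = True
    return y

def automorphism_decode(H, y, perms):
    """Retry peeling under permutations from the code's automorphism group."""
    for p in perms:
        yp = [y[p[j]] for j in range(len(y))]  # permuted word is again a codeword
        out = peel(H, yp)
        if None not in out:
            inv = np.argsort(p)                # invert the permutation
            return [out[inv[j]] for j in range(len(y))]
    return None  # erasure pattern unresolved by the given permutations

# For a cyclic code of length n, all cyclic shifts are automorphisms:
# perms = [[(j + s) % n for j in range(n)] for s in range(n)]
```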
Capacity-Achieving Ensembles of Accumulate-Repeat-Accumulate Codes for the Erasure Channel with Bounded Complexity
The paper introduces ensembles of accumulate-repeat-accumulate (ARA) codes
which asymptotically achieve capacity on the binary erasure channel (BEC) with
bounded complexity, per information bit, of encoding and decoding. It
also introduces symmetry properties which play a central role in the
construction of capacity-achieving ensembles for the BEC with bounded
complexity. The results here improve on the tradeoff between performance and
complexity provided by previous constructions of capacity-achieving ensembles
of codes defined on graphs. The superiority of ARA codes with moderate to large
block lengths is exemplified by computer simulations which compare their
performance with that of previously reported capacity-achieving ensembles of
LDPC and IRA codes. The ARA codes also have the advantage of being systematic.
Comment: Submitted to IEEE Trans. on Information Theory, December 1, 2005. Includes 50 pages and 13 figures
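The accumulate-repeat-accumulate structure can be sketched as follows; the fixed repetition factor and random interleaver are simplifying assumptions standing in for the ensemble's degree distributions and puncturing, not the paper's construction.

```python
import numpy as np

def accumulate(bits):
    """Accumulator, i.e. a running XOR (rate-1 1/(1+D) encoder)."""
    return np.bitwise_xor.accumulate(bits)

def ara_encode(info, repeat=3, rng=np.random.default_rng(0)):
    pre = accumulate(info)                  # outer (pre-)accumulator
    rep = np.repeat(pre, repeat)            # repetition stage
    inter = rep[rng.permutation(rep.size)]  # interleaver
    parity = accumulate(inter)              # inner accumulator
    return np.concatenate([info, parity])   # systematic: info bits sent too

print(ara_encode(np.array([1, 0, 1, 1], dtype=np.uint8)))
```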
Fingerprinting with Minimum Distance Decoding
This work adopts an information theoretic framework for the design of
collusion-resistant coding/decoding schemes for digital fingerprinting. More
specifically, the minimum distance decision rule is used to identify 1 out of t
pirates. Achievable rates, under this detection rule, are characterized in two
distinct scenarios. First, we consider the averaging attack where a random
coding argument is used to show that the rate 1/2 is achievable with t=2
pirates. Our study is then extended to the general case of an arbitrary number
of pirates t, highlighting the underlying complexity-performance tradeoff.
Overall, these
results establish the significant performance gains offered by minimum distance
decoding as compared to other approaches based on orthogonal codes and
correlation detectors. In the second scenario, we characterize the achievable
rates, with minimum distance decoding, under any collusion attack that
satisfies the marking assumption. For t=2 pirates, we show that a positive
rate is achievable using an ensemble of random linear codes. For t ≥ 3, the
existence of a non-resolvable collusion attack, with minimum distance
decoding, at any non-zero rate is established. Inspired by
our theoretical analysis, we then construct coding/decoding schemes for
fingerprinting based on the celebrated Belief-Propagation framework. Using an
explicit repeat-accumulate code, we obtain a vanishingly small probability of
misidentification at rate 1/3 under averaging attack with t=2. For collusion
attacks which satisfy the marking assumption, we use a more sophisticated
accumulate-repeat-accumulate code to obtain a vanishingly small
misidentification probability at rate 1/9 with t=2. These results represent a
marked improvement over the best available designs in the literature.
Comment: 26 pages, 6 figures, submitted to IEEE Transactions on Information Forensics and Security
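The minimum distance decision rule itself is simple to state in code; the toy codebook and forgery below are placeholders, not the paper's random linear or ARA constructions.

```python
import numpy as np

def min_distance_accuse(codebook, forgery):
    """Accuse the user whose fingerprint is nearest in Hamming distance."""
    dists = np.count_nonzero(codebook != forgery, axis=1)
    return int(np.argmin(dists))

codebook = np.array([[0, 1, 1, 0, 1],   # user 0's fingerprint
                     [1, 1, 0, 0, 1],   # user 1's fingerprint
                     [0, 0, 1, 1, 0]])  # user 2's fingerprint
forgery = np.array([0, 1, 0, 0, 1])     # copy produced by the colluders
print(min_distance_accuse(codebook, forgery))  # 0 (tie with user 1; lowest index wins)
```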
Bounds on the Error Probability of Raptor Codes under Maximum Likelihood Decoding
In this paper, upper and lower bounds on the probability of decoding failure
under maximum likelihood decoding are derived for different (nonbinary) Raptor
code constructions. In particular, four constructions are considered:
(i) the standard Raptor code construction, (ii) a multi-edge type construction,
(iii) a construction where the Raptor code is nonbinary but the generator
matrix of the LT code has only binary entries, (iv) a combination of (ii) and
(iii). The latter construction resembles the one employed by RaptorQ codes,
which, at the time of writing, represent the state of the art in fountain
codes. The bounds are shown to be tight and provide an important aid for the
design of Raptor codes.
Comment: Submitted for review
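Over an erasure channel, ML decoding of a fountain code fails exactly when the received constraint matrix is column-rank deficient over the underlying field. The sketch below estimates this failure probability by Monte Carlo for the binary case; the dense random rows are a placeholder for an actual LT degree distribution and precode.

```python
import numpy as np

def gf2_rank(M):
    """Column rank over GF(2) by Gaussian elimination."""
    M = M.copy()
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]              # eliminate the column elsewhere
        rank += 1
    return rank

def failure_rate(k, overhead, trials=500, rng=np.random.default_rng(1)):
    """Monte Carlo estimate of P(ML decoding failure) at a given overhead."""
    fails = 0
    for _ in range(trials):
        # Placeholder: dense random binary rows instead of an LT/precode matrix.
        G = (rng.random((k + overhead, k)) < 0.5).astype(np.uint8)
        fails += gf2_rank(G) < k             # rank deficiency <=> ML failure
    return fails / trials

print(failure_rate(k=50, overhead=5))
```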
Density Evolution for Asymmetric Memoryless Channels
Density evolution is one of the most powerful analytical tools for
low-density parity-check (LDPC) codes and graph codes with message passing
decoding algorithms. With channel symmetry as one of its fundamental
assumptions, density evolution (DE) has been widely and successfully applied to
different channels, including binary erasure channels, binary symmetric
channels, binary additive white Gaussian noise channels, etc. This paper
generalizes density evolution to non-symmetric memoryless channels, which in
turn broadens its application to general memoryless channels, e.g., Z-channels,
composite white Gaussian noise channels, etc. The central theorem underpinning
this generalization is the convergence to perfect projection for any fixed size
supporting tree. A new iterative formula of the same complexity is then
presented, and the theorems necessary for performance concentration are
developed. Several properties of the new density evolution method are
explored, including stability results for general asymmetric memoryless
channels. Simulations, code optimizations, and possible new applications
suggested by this new density evolution method are also provided. This result
is also used to prove the typicality of linear LDPC codes among the coset code
ensemble when the minimum check node degree is sufficiently large. It is shown
that the convergence to perfect projection is essential to the belief
propagation algorithm even when only symmetric channels are considered. Hence,
the proof of convergence to perfect projection also completes the theory of
classical density evolution for symmetric memoryless channels.
Comment: To appear in the IEEE Transactions on Information Theory
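For reference, the classical symmetric-channel recursion that this paper generalizes takes a one-line form on the BEC: with edge-perspective degree distributions λ(x) and ρ(x), the message erasure probability evolves as x_{l+1} = ε·λ(1 − ρ(1 − x_l)). The (3,6)-regular degree pair below is a standard textbook example, not taken from the paper.

```python
def density_evolution_bec(eps, lam, rho, iters=500):
    """Track the erasure probability of variable-to-check messages on the BEC."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1 - rho(1 - x))
    return x

lam = lambda x: x ** 2  # regular variable degree 3: lambda(x) = x^2
rho = lambda x: x ** 5  # regular check degree 6:    rho(x)    = x^5
print(density_evolution_bec(0.42, lam, rho))  # below threshold (~0.4294): -> 0
print(density_evolution_bec(0.44, lam, rho))  # above threshold: erasures persist
```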