Low-Density Arrays of Circulant Matrices: Rank and Row-Redundancy Analysis, and Quasi-Cyclic LDPC Codes
This paper presents a general analysis of the rank and row-redundancy of an
array of circulants whose null space defines a QC-LDPC code. Based on the
Fourier transform and the properties of conjugacy classes and Hadamard products
of matrices, we derive tight upper bounds on the rank and row-redundancy of a
general array of circulants, which make it possible to take row-redundancy into
account when constructing QC-LDPC codes to achieve better performance. We further
investigate the rank of two types of QC-LDPC code constructions, based on
Vandermonde matrices and on Latin squares, and give combinatorial expressions
for the exact rank in some specific cases, which demonstrates the tightness of
the bounds we derive. Moreover, several new constructions of QC-LDPC codes
with large row-redundancy are presented and analyzed.
Comment: arXiv admin note: text overlap with arXiv:1004.118
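The two quantities the abstract studies can be illustrated numerically. A minimal sketch, assuming nothing about the paper's specific constructions (the circulant first rows below are arbitrary examples, not taken from the paper): build an array of binary circulants, compute its rank over GF(2), and report the row-redundancy as rows minus rank.

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix from its first row (each row is a cyclic right shift)."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))], dtype=np.uint8)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]          # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                      # eliminate column c elsewhere
        rank += 1
    return rank

# A 2x2 array of 7x7 circulants (arbitrary example first rows); its GF(2) rank
# is generally below full, and the gap is the row-redundancy analysed above.
A = np.block([[circulant([1,1,0,1,0,0,0]), circulant([1,0,1,1,0,0,0])],
              [circulant([0,1,1,1,0,0,0]), circulant([1,1,1,0,0,0,0])]])
rank = gf2_rank(A)
redundancy = A.shape[0] - rank
print(rank, redundancy)
```

The bounds in the paper predict such ranks analytically from the Fourier-domain structure, without running the elimination shown here.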
Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications
Coding; Communications; Engineering; Networks; Information Theory; Algorithm
A STUDY OF LINEAR ERROR CORRECTING CODES
Since Shannon's ground-breaking work in 1948, there have been two main development streams
of channel coding in approaching the limit of communication channels, namely classical coding
theory which aims at designing codes with large minimum Hamming distance and probabilistic
coding which places the emphasis on low complexity probabilistic decoding using long codes built
from simple constituent codes. This work presents some further investigations in these two channel
coding development streams.
Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with a sparse
parity-check matrix and a low-complexity decoder. Two novel methods of constructing algebraic binary
LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents
and Mattson-Solomon polynomials, and are complementary to each other. In addition to some new
cyclic iteratively decodable codes, the two methods generate the well-known Euclidean and
projective geometry codes. Their extension to non-binary fields is shown to be straightforward.
For short block lengths, these algebraic cyclic LDPC codes converge well under iterative
decoding. It is also shown that, for some of these codes, maximum-likelihood performance may
be achieved by a modified belief-propagation decoder which uses a different subset of the codewords
of the dual code for each iteration.
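The cyclotomic cosets underpinning these constructions are easy to compute. A minimal sketch (standard definition, not the thesis's specific construction): the q-cyclotomic coset of s modulo n is {s, qs, q²s, ...} reduced mod n, and these cosets partition {0, ..., n-1}.

```python
def cyclotomic_cosets(n, q=2):
    """Partition {0,...,n-1} into q-cyclotomic cosets mod n: {s, q*s, q^2*s, ...}."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in seen:     # multiply by q until the cycle closes
            seen.add(x)
            coset.append(x)
            x = (x * q) % n
        cosets.append(coset)
    return cosets

print(cyclotomic_cosets(7))  # [[0], [1, 2, 4], [3, 6, 5]]
```

Each coset corresponds to an irreducible factor of xⁿ - 1 over GF(q), which is what ties the cosets to idempotents and Mattson-Solomon polynomials in the constructions above.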
Following a property of the revolving-door combination generator, multi-threaded minimum
Hamming distance computation algorithms are developed. Using these algorithms, the previously
unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated.
In addition, the highest minimum Hamming distance attainable by all binary cyclic codes
of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes
with a higher minimum Hamming distance than the previously best known linear codes
have been found.
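The baseline these multi-threaded algorithms accelerate is plain codeword enumeration. A minimal sketch (naive enumeration, not the revolving-door method itself, and feasible only for small dimension k):

```python
import itertools
import numpy as np

def min_distance(G):
    """Minimum Hamming distance of the binary linear code generated by G,
    by enumerating all 2^k - 1 nonzero codewords (tractable only for small k)."""
    k, n = G.shape
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if not any(msg):
            continue                      # skip the zero codeword
        cw = np.dot(msg, G) % 2
        best = min(best, int(cw.sum()))
    return best

# Generator matrix of the [7,4] Hamming code; its minimum distance is 3.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1],
              [0,0,0,1,1,0,1]])
print(min_distance(G))  # 3
```

For a length-199 quadratic residue code this space is astronomically large, which is why combination-generator orderings, multi-threading, and the structural shortcuts described below are needed in practice.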
It is shown that, by exploiting the structure of circulant matrices, the number of codewords
required to compute the minimum Hamming distance, and the number of codewords of a given
Hamming weight, of binary double-circulant codes based on primes may be reduced. A means
of independently verifying the exhaustively computed number of codewords of a given Hamming
weight of these double-circulant codes is developed; in conjunction with this, it is proved that
some published results are incorrect, and the correct weight spectra are presented. Moreover, it is
shown that it is possible to estimate the minimum Hamming distance of this family of prime-based
double-circulant codes.
It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch
algorithm. By extending this algorithm, a list decoder is derived, and a novel CRC-less error detection
mechanism that offers much better throughput and performance than the conventional CRC
scheme is described. Using the same method, it is shown that the performance of the conventional CRC
scheme may be considerably enhanced. Error detection is an integral part of an incremental-redundancy
communications system, and it is shown that sequences of good error-correction codes
suitable for use in incremental-redundancy communications systems may be obtained using
Constructions X and XX. Examples are given and their performances presented in comparison to
conventional CRC schemes.
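The target the Dorsch algorithm approaches incrementally is soft-decision maximum-likelihood decoding by correlation. A minimal sketch of that exhaustive baseline (not the incremental Dorsch ordering itself), for a toy code: map bits to ±1 and pick the codeword with maximum correlation to the received vector.

```python
import itertools
import numpy as np

def ml_correlation_decode(G, r):
    """Soft-decision ML decoding by exhaustive correlation over all 2^k codewords.
    The Dorsch algorithm reaches the same answer incrementally, testing the most
    likely information patterns first instead of all of them."""
    k, n = G.shape
    best_cw, best_corr = None, -np.inf
    for msg in itertools.product([0, 1], repeat=k):
        cw = np.dot(msg, G) % 2
        corr = np.dot(1 - 2 * cw, r)     # bit b -> (-1)^b, then correlate
        if corr > best_corr:
            best_corr, best_cw = corr, cw
    return best_cw

# Toy [5,3] code (arbitrary example generator matrix).
G = np.array([[1,0,0,1,1],
              [0,1,0,1,0],
              [0,0,1,0,1]])
# All-zero codeword transmitted as +1s; noise pushes one sample negative.
r = np.array([0.9, 1.1, -0.2, 0.8, 1.0])
print(ml_correlation_decode(G, r))  # [0 0 0 0 0]
```

The correlation metric also underlies the CRC-less error detection idea above: the gap between the best and runner-up correlations indicates how trustworthy the decision is.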
Area and energy efficient VLSI architectures for low-density parity-check decoders using an on-the-fly computation
The VLSI implementation complexity of a low density parity check (LDPC)
decoder is largely influenced by the interconnect and the storage requirements. This
dissertation presents the decoder architectures for regular and irregular LDPC codes that
provide substantial gains over existing academic and commercial implementations. Several
structured properties of LDPC codes and decoding algorithms are observed and used to
construct hardware implementations with reduced processing complexity. The proposed
architectures utilize an on-the-fly computation paradigm which permits scheduling the
computations such that memory requirements and re-computations are reduced.
Using this paradigm, the run-time configurable and multi-rate VLSI architectures for the
rate compatible array LDPC codes and irregular block LDPC codes are designed. Rate
compatible array codes are considered for DSL applications. Irregular block LDPC codes
are proposed for IEEE 802.16e, IEEE 802.11n, and IEEE 802.20. When compared with a
recent implementation of an 802.11n LDPC decoder, the proposed decoder reduces the
logic complexity by 6.45x and memory complexity by 2x for a given data throughput.
When compared to the latest reported multi-rate decoders, this decoder design has an area efficiency of around 5.5x and energy efficiency of 2.6x for a given data throughput. The
numbers are normalized for a 180nm CMOS process.
Properly designed array codes have low error floors and meet the requirements of
magnetic channel and other applications which need several Gbps of data throughput. A
high throughput and fixed code architecture for array LDPC codes has been designed. No
modification to the code is performed as this can result in high error floors. This parallel
decoder architecture has no routing congestion and is scalable for longer block lengths.
When compared to the latest fixed code parallel decoders in the literature, this design has
an area efficiency of around 36x and an energy efficiency of 3x for a given data throughput.
Again, the numbers are normalized for a 180nm CMOS process. In summary, the design
and analysis details of the proposed architectures are described in this dissertation. The
results from the extensive simulation and VHDL verification on FPGA and ASIC design
platforms are also presented.
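The storage reductions such architectures achieve hinge on the structure of the check-node update. The dissertation does not name a single algorithm here, but the min-sum update is a common choice in LDPC decoder hardware, so a hedged sketch of it illustrates the point: each outgoing message needs only the overall sign and the two smallest input magnitudes, not all incoming messages.

```python
import numpy as np

def min_sum_check_update(llrs):
    """Min-sum check-node update: each outgoing message carries the sign product
    of the *other* inputs and the minimum of the *other* magnitudes. Only the
    overall sign and the two smallest magnitudes need to be stored, which is
    what enables low-storage, on-the-fly implementations."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    # the overall minimum serves every output except the one it came from
    out = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    return total_sign * signs * out      # dividing out each input's own sign

print(min_sum_check_update([2.0, -1.5, 0.5, 3.0]))  # [-0.5  0.5 -1.5 -0.5]
```

Storing two magnitudes and one sign per check node, instead of one message per edge, is precisely the kind of saving behind the 2x memory reduction quoted above.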
CONVERGENCE IMPROVEMENT OF ITERATIVE DECODERS
Iterative decoding techniques shook up the field of error correction, and communications
in general. Their remarkable compromise between complexity and performance
offered much more freedom in code design and made highly complex codes, considered
undecodable until recently, part of almost any communication system.
Nevertheless, iterative decoding is a sub-optimal decoding method and, as such, it has
attracted huge research interest. But the iterative decoder still hides many of its secrets,
as it has not yet been possible to fully describe its behaviour and its cost function.
This work presents the convergence problem of iterative decoding from various angles
and explores methods for reducing any sub-optimality in its operation. The decoding
algorithms for both LDPC and turbo codes were investigated, and aspects that contribute
to convergence problems were identified. A new algorithm was proposed, capable of providing
considerable coding gain in any iterative scheme. Moreover, it was shown that
for some codes the proposed algorithm is sufficient to eliminate any sub-optimality and
perform maximum-likelihood decoding. Its performance and efficiency were compared to
those of other convergence improvement schemes.
Various conditions that can be considered critical to the outcome of the iterative decoder
were also investigated, and the decoding algorithm of LDPC codes was traced
analytically to verify the experimental results.
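What "convergence" means for an iterative decoder can be made concrete with the simplest member of the family. A hedged sketch, using Gallager's bit-flipping decoder (an illustrative choice, not the thesis's algorithm): iterate until the syndrome is zero or the iteration budget runs out; a nonzero syndrome at exit is exactly a convergence failure.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=20):
    """Gallager bit-flipping with the usual stopping rule: stop when the
    syndrome is zero (a valid codeword) or after max_iter iterations.
    Returning with a nonzero syndrome is a convergence failure."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(y) % 2
        if not syndrome.any():
            return y, True               # converged to a codeword
        # per bit: number of unsatisfied checks it participates in
        unsat = H.T.dot(syndrome)
        worst = unsat.max()
        if worst == 0:
            break
        y = (y + (unsat == worst)) % 2   # flip the most-suspect bits
    return y, False

# Parity-check matrix of the [7,4] Hamming code (columns are 1..7 in binary).
H = np.array([[1,0,1,0,1,0,1],
              [0,1,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
y = np.zeros(7, dtype=int)
y[2] = 1                                 # single bit error on the all-zero codeword
decoded, ok = bit_flip_decode(H, y)
print(decoded, ok)  # [0 0 0 0 0 0 0] True
```

Belief propagation replaces the hard flip counts with soft messages, but the stopping rule, and the possibility of oscillating or stalling instead of reaching a zero syndrome, is the same phenomenon studied above.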
Quantum stabilizer codes and beyond
The importance of quantum error correction in paving the way to build a
practical quantum computer is no longer in doubt. This dissertation makes a
threefold contribution to the mathematical theory of quantum error-correcting
codes. Firstly, it extends the framework of an important class of quantum codes
-- nonbinary stabilizer codes. It clarifies the connections of stabilizer codes
to classical codes over quadratic extension fields, provides many new
constructions of quantum codes, and develops further the theory of optimal
quantum codes and punctured quantum codes. Secondly, it contributes to the
theory of operator quantum error-correcting codes, also called subsystem
codes. These codes are expected to have more efficient error recovery schemes
than stabilizer codes. This dissertation develops a framework for the study and
analysis of subsystem codes using character-theoretic methods. In particular,
this work establishes a close link between subsystem codes and classical codes,
showing that subsystem codes can be constructed from arbitrary classical codes.
Thirdly, it seeks to exploit the knowledge of noise to design efficient quantum
codes and considers more realistic channels than the commonly studied
depolarizing channel. It gives systematic constructions of asymmetric quantum
stabilizer codes that exploit the asymmetry of errors in certain quantum
channels.
Comment: Ph.D. Dissertation, Texas A&M University, 200
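The link between classical and quantum codes can be made concrete with the CSS construction, the standard special case of the stabilizer framework discussed above. A minimal sketch: two classical parity-check matrices yield commuting X- and Z-type stabilizer generators, and hence a valid quantum code, exactly when HX·HZᵀ = 0 (mod 2); choosing different classical codes for the two error types gives the asymmetric constructions mentioned in the abstract.

```python
import numpy as np

def is_css_pair(HX, HZ):
    """Check the CSS condition HX @ HZ.T = 0 (mod 2): the X- and Z-type
    stabilizer generators built from the two classical parity-check
    matrices commute, so together they define a quantum stabilizer code."""
    return not (HX.dot(HZ.T) % 2).any()

# Steane's [[7,1,3]] code: both halves come from the [7,4] Hamming code,
# whose dual is contained in itself, so the condition holds.
H = np.array([[1,0,1,0,1,0,1],
              [0,1,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
print(is_css_pair(H, H))  # True
```

The general nonbinary stabilizer codes of the dissertation replace this mod-2 orthogonality with a symplectic or Hermitian condition over a quadratic extension field, but the commutation requirement plays the same role.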