Simplified decoding techniques for linear block codes
Error-correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage, and other applications. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword; however, in the late 1950s researchers proposed a relaxed error-correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both an algorithmic and an architectural standpoint. The codes under consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel, and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to outperform Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed.
The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes having complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
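The evaluation-based view of RS encoding described above can be sketched in a few lines. The sketch below is illustrative only: it works over the small prime field GF(11) rather than the binary extension fields used in practice, and it recovers the message by plain Lagrange interpolation from error-free positions, with no error correction; the field size, code parameters, and function names are all assumptions for the example, not the thesis's implementation.

```python
# Evaluation-based Reed-Solomon encoding over the prime field GF(11).
# Toy sketch: real RS implementations work over GF(2^m) and include
# error correction; this only shows the polynomial-evaluation view.

P = 11                      # field size (prime, so arithmetic is mod P)
N, K = 7, 3                 # code length and dimension, N <= P - 1

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) at x mod P."""
    acc = 0
    for c in reversed(coeffs):       # Horner's rule
        acc = (acc * x + c) % P
    return acc

def rs_encode(message):
    """Codeword = message polynomial evaluated at N distinct field points."""
    assert len(message) == K
    return [poly_eval(message, x) for x in range(1, N + 1)]

def rs_recover(points):
    """Recover the message from any K error-free (x, y) pairs by
    Lagrange interpolation (no error correction in this sketch)."""
    message = [0] * K
    xs = [x for x, _ in points]
    for i, (xi, yi) in enumerate(points):
        basis = [1]                  # coefficients of prod_{j!=i} (x - xs[j])
        denom = 1
        for j, xj in enumerate(xs):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for d, c in enumerate(basis):
                new[d] = (new[d] - xj * c) % P
                new[d + 1] = (new[d + 1] + c) % P
            basis = new
            denom = (denom * (xi - xj)) % P
        scale = (yi * pow(denom, -1, P)) % P   # modular inverse (Python 3.8+)
        for d, c in enumerate(basis):
            message[d] = (message[d] + scale * c) % P
    return message
```

Because any K evaluations determine a degree-&lt;K polynomial, the message can be recovered from any K error-free codeword positions, which is also why evaluation codes tolerate erasures so naturally.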
Efficient Maximum-Likelihood Decoding of Linear Block Codes on Binary Memoryless Channels
In this work, we consider efficient maximum-likelihood decoding of linear
block codes for small-to-moderate block lengths. The presented approach is a
branch-and-bound algorithm using the cutting-plane approach of Zhang and Siegel
(IEEE Trans. Inf. Theory, 2012) for obtaining lower bounds. We have compared
our proposed algorithm to the state-of-the-art commercial integer program
solver CPLEX, and for all considered codes our approach is faster for both low
and high signal-to-noise ratios. For instance, for the benchmark (155,64)
Tanner code our algorithm is more than 11 times as fast as CPLEX for an SNR of
1.0 dB on the additive white Gaussian noise channel. By a small modification,
our algorithm can be used to calculate the minimum distance, which we have
again verified to be much faster than using the CPLEX solver.
Comment: Submitted to 2014 International Symposium on Information Theory. 5 pages. Accepted.
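As a point of reference for the speedups reported above, the baseline that branch-and-bound avoids is scoring every codeword. The sketch below is that toy baseline, not the paper's cutting-plane algorithm: it enumerates all 2^k codewords of the small [7,4] Hamming code and applies the ML decision rule for BPSK on an AWGN channel. The generator matrix and function names are assumptions for the example.

```python
# Exhaustive maximum-likelihood decoding of a small linear block code on a
# binary-input AWGN channel. The paper's branch-and-bound algorithm avoids
# this 2^k enumeration; this sketch only illustrates the ML decision rule.
import itertools

# Generator matrix of a [7,4,3] Hamming code in systematic form [I | P].
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def encode(msg):
    """Multiply the message row vector by G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

CODEBOOK = [encode(m) for m in itertools.product([0, 1], repeat=4)]

def ml_decode(llrs):
    """Return the codeword maximizing sum_i (-1)^{c_i} * llr_i, which for
    BPSK (0 -> +1, 1 -> -1) on AWGN is exactly the ML / minimum-Euclidean-
    distance decision."""
    return max(CODEBOOK,
               key=lambda c: sum((1 - 2 * b) * l for b, l in zip(c, llrs)))
```

The cost of this baseline grows as 2^k per received word, which is why, already at the (155,64) Tanner code mentioned above, enumeration is hopeless and structured search such as branch-and-bound with cutting planes is needed.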
A Comparison Study of LDPC and BCH Codes
The need for efficient and reliable digital data communication systems has
risen rapidly in recent years. Various factors have driven this need, among
them the increase in automatic data-processing equipment and the growing need
for long-range communication. LDPC and BCH codes were developed to achieve
more reliable data transmission in communication systems. This project covers
research on the LDPC and BCH error-correction codes. Algorithms for simulating
both LDPC and BCH codes were also investigated, including generating the
parity-check matrix, generating the message code as a Galois array matrix,
encoding the message bits, modulation, and decoding the message bits for LDPC.
Matlab software is used for encoding and decoding the codes. The accuracy of
the LDPC simulation codes ranges from 95% to 99%. The results obtained show
that LDPC codes are more efficient and reliable than BCH codes for error
correction, because LDPC codes have channel performance very close to the
Shannon limit. LDPC codes are a class of linear block codes that are proving
to be among the best-performing forward error correction available. Markets
such as broadband wireless and mobile networks operate in noisy environments
and need powerful error correction to improve reliability and data rates.
Through LDPC and BCH codes, these systems can operate more reliably, more
efficiently, and at higher data rates.
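The simulation flow described above (generate bits, encode, modulate, add noise, decode, count errors) can be sketched as a minimal Monte-Carlo loop. The sketch below is a stand-in, not the project's Matlab code: it uses a trivial 3x repetition code in place of LDPC or BCH so the whole chain fits in a few lines, and the parameters and names are assumptions for the example.

```python
# Minimal Monte-Carlo bit-error-rate loop: random bits -> encode ->
# BPSK modulate -> AWGN -> hard decision -> decode -> count errors.
# A 3x repetition code stands in for LDPC/BCH to keep the chain short.
import random

random.seed(0)
N_BITS = 5000
SIGMA = 0.8                  # AWGN noise standard deviation

bits = [random.randrange(2) for _ in range(N_BITS)]
coded = [b for b in bits for _ in range(3)]            # repetition encoder

def channel(stream):
    """BPSK (0 -> +1, 1 -> -1) over AWGN, then hard decision per symbol."""
    return [int((1 - 2 * b) + random.gauss(0, SIGMA) < 0) for b in stream]

raw_rx = channel(bits)
coded_rx = channel(coded)
decoded = [int(sum(coded_rx[3 * i:3 * i + 3]) > 1) for i in range(N_BITS)]

ber_uncoded = sum(a != b for a, b in zip(bits, raw_rx)) / N_BITS
ber_coded = sum(a != b for a, b in zip(bits, decoded)) / N_BITS
print(f"uncoded BER = {ber_uncoded:.4f}, repetition-3 BER = {ber_coded:.4f}")
```

Note that this loop compares at equal per-symbol energy rather than equal energy per information bit; a fair coded-versus-uncoded comparison, like the LDPC/BCH one above, normalizes by Eb/N0.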
Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data
We provide formal definitions and efficient secure techniques for
- turning noisy information into keys usable for any cryptographic
application, and, in particular,
- reliably and securely authenticating biometric data.
Our techniques apply not just to biometric information, but to any keying
material that, unlike traditional cryptographic keys, is (1) not reproducible
precisely and (2) not distributed uniformly. We propose two primitives: a
"fuzzy extractor" reliably extracts nearly uniform randomness R from its input;
the extraction is error-tolerant in the sense that R will be the same even if
the input changes, as long as it remains reasonably close to the original.
Thus, R can be used as a key in a cryptographic application. A "secure sketch"
produces public information about its input w that does not reveal w, and yet
allows exact recovery of w given another value that is close to w. Thus, it can
be used to reliably reproduce error-prone biometric inputs without incurring
the security risk inherent in storing them.
We define the primitives to be both formally secure and versatile,
generalizing much prior work. In addition, we provide nearly optimal
constructions of both primitives for various measures of "closeness" of input
data, such as Hamming distance, edit distance, and set difference.
Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer
LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar,
clarity, and typos.
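The "secure sketch" primitive described above admits a very small illustration in the code-offset style: publish s = w XOR C(x) for a random codeword C(x), and recover w from a noisy w' by decoding w' XOR s back to a codeword. The sketch below is a toy, using an 8-block 3x repetition code for Hamming-distance errors; real constructions use far stronger codes and come with the security analysis the paper formalizes, and the parameters and names here are assumptions for the example.

```python
# Toy code-offset secure sketch for Hamming-distance errors, built on a
# 3x repetition code: SS(w) = w XOR C(x) for random x, and Rec(w', s)
# decodes w' XOR s to the nearest codeword and re-offsets to get w.
import secrets

R, K = 3, 8                  # repetition factor and number of blocks
N = R * K                    # length of the noisy input w in bits

def encode(bits):            # repetition-code encoder: K bits -> N bits
    return [b for b in bits for _ in range(R)]

def decode(bits):            # majority-vote decoder: N bits -> K bits
    return [int(sum(bits[i * R:(i + 1) * R]) > R // 2) for i in range(K)]

def sketch(w):
    """Public helper data: w masked by a uniformly random codeword."""
    x = [secrets.randbelow(2) for _ in range(K)]
    return [wi ^ ci for wi, ci in zip(w, encode(x))]

def recover(w_noisy, s):
    """Exact recovery of w from w' with at most one flipped bit per block."""
    offset = [wi ^ si for wi, si in zip(w_noisy, s)]   # = codeword XOR errors
    c = encode(decode(offset))                         # correct the errors
    return [si ^ ci for si, ci in zip(s, c)]
```

The point of the construction is that s reveals w only up to a uniformly random codeword offset, yet still allows exact reconstruction from any w' within the code's error-correction radius, which is precisely the secure-sketch behavior the abstract describes.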