
    Novel Code-Construction for (3, k) Regular Low Density Parity Check Codes

    Communication links that lack the ability to retransmit generally rely on forward error correction (FEC) techniques, which use error-correcting codes (ECCs) to detect and correct errors caused by channel noise. Several ECCs in the literature serve this purpose; among them, low-density parity-check (LDPC) codes have become quite popular because their performance comes closest to the Shannon limit. This thesis proposes a novel code-construction method for both (3, k) regular and irregular LDPC codes. The choice of (3, k) regular LDPC codes is made because they have low decoding complexity and a minimum Hamming distance of at least 4. In this work, the proposed code construction consists of an information sub-matrix (Hinf) and an almost-lower-triangular parity sub-matrix (Hpar). The core design expands deterministic base matrices in three stages: the deterministic base matrix of the parity part starts from a triple-diagonal matrix, while that of the information part is an all-ones matrix. The proposed matrix H is designed to generate various code rates (R) by keeping the number of rows of H fixed and changing only the number of columns of Hinf. All codes designed and presented in this thesis have no rank deficiency, require no pre-processing step for encoding, have a non-singular parity part (Hpar), contain no 4-cycles, and admit low encoding complexity of order (N + g²), where g² ≪ N. The proposed (3, k) regular codes are shown to perform within 1.44 dB of the Shannon limit at a bit error rate (BER) of 10⁻⁶ when the code rate exceeds R = 0.875. They have BER and block error rate (BLER) performance comparable with other techniques, such as (3, k) regular quasi-cyclic (QC) and (3, k) regular random LDPC codes, when code rates are at least R = 0.7.
    In addition, the proposed (3, 42) regular LDPC code is shown to perform as close as 0.97 dB from the Shannon limit at BER 10⁻⁶ with encoding complexity 1.0225N, for R = 0.928 and N = 14364, a result that no other published technique has reached.
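The no-4-cycles property claimed above can be checked mechanically: a parity-check matrix H is free of 4-cycles exactly when no two distinct rows share a 1 in more than one column, which is visible in the off-diagonal entries of H·Hᵀ. A minimal sketch of that check (the toy matrix below is illustrative, not the thesis's construction):

```python
import numpy as np

def has_4_cycle(H: np.ndarray) -> bool:
    """True if the Tanner graph of H contains a length-4 cycle.

    Two rows of H close a 4-cycle exactly when they share a 1 in
    more than one column, so it suffices to inspect H @ H.T.
    """
    overlap = H @ H.T          # (i, j) entry = number of shared columns
    np.fill_diagonal(overlap, 0)
    return bool((overlap > 1).any())

# Toy example: the vertex-edge incidence matrix of K4 is 4-cycle-free,
# since any two rows (vertices) share exactly one column (edge).
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
print(has_4_cycle(H))                         # → False

# Duplicating any column immediately creates a 4-cycle.
print(has_4_cycle(np.hstack([H, H[:, :1]])))  # → True
```

Avoiding 4-cycles matters because short cycles correlate the messages exchanged during iterative decoding, degrading BER performance.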
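The "dB from Shannon limit" figures above are gaps to the minimum Eb/N0 at which reliable communication is possible for the given code rate. A sketch using the standard unconstrained-input AWGN formula Eb/N0 ≥ (2^(2R) − 1)/(2R); note the thesis's baseline (e.g. binary-input capacity) may differ slightly, so this is for orientation only:

```python
import math

def shannon_limit_ebn0_db(R: float) -> float:
    """Minimum Eb/N0 (dB) for reliable communication at code rate R
    over the real-valued AWGN channel: Eb/N0 >= (2**(2R) - 1) / (2R)."""
    ebn0 = (2 ** (2 * R) - 1) / (2 * R)
    return 10 * math.log10(ebn0)

# Rates quoted in the abstract; the limit rises with the code rate.
for R in (0.7, 0.875, 0.928):
    print(f"R = {R}: Shannon limit ≈ {shannon_limit_ebn0_db(R):.2f} dB")
```

For R = 0.5 the formula gives the familiar 0 dB limit, which is a quick sanity check on the implementation.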

    Sparse graph codes for compression, sensing, and secrecy

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the student PDF version of the thesis. Includes bibliographical references (p. 201-212). Sparse graph codes were first introduced by Gallager over 40 years ago. Over the last two decades, such codes have been the subject of intense research, and capacity-approaching sparse graph codes with low-complexity encoding and decoding algorithms have been designed for many channels. Motivated by the success of sparse graph codes for channel coding, we explore their use for four other problems related to compression, sensing, and security. First, we construct locally encodable and decodable source codes for a simple class of sources. Local encodability refers to the property that when the original source data changes slightly, the compression produced by the source code can be updated easily. Local decodability refers to the property that a single source symbol can be recovered without having to decode the entire source block. Second, we analyze a simple message-passing algorithm for compressed sensing recovery, and show that our algorithm provides a nontrivial ℓ1/ℓ1 guarantee. We also show that very sparse matrices, and matrices whose entries must be either 0 or 1, have poor performance with respect to the restricted isometry property for the ℓ2 norm. Third, we analyze the performance of a special class of sparse graph codes, LDPC codes, for the problem of quantizing a uniformly random bit string under Hamming distortion. We show that LDPC codes can come arbitrarily close to the rate-distortion bound using an optimal quantizer. This is a special case of a general result showing a duality between lossy source coding and channel coding: if we ignore computational complexity, then good channel codes are automatically good lossy source codes.
We also prove a lower bound on the average degree of vertices in an LDPC code as a function of the gap to the rate-distortion bound. Finally, we construct efficient, capacity-achieving codes for the wiretap channel, a model of communication that allows one to provide information-theoretic, rather than computational, security guarantees. Our main results include the introduction of a new security criterion that is an information-theoretic analog of semantic security; the construction of capacity-achieving codes possessing strong security, with nearly linear time encoding and decoding algorithms, for any degraded wiretap channel; and the construction of capacity-achieving codes possessing semantic security, with linear time encoding and decoding algorithms, for erasure wiretap channels. Our analysis relies on a relatively small set of tools. One tool is density evolution, a powerful method for analyzing the behavior of message-passing algorithms on long, random sparse graph codes. Another concept we use extensively is the notion of an expander graph. Expander graphs have powerful properties that allow us to prove adversarial, rather than probabilistic, guarantees for message-passing algorithms. Expander graphs are also useful in the context of the wiretap channel because they provide a method for constructing randomness extractors. Finally, we use several well-known inequalities (Harper's inequality, Azuma's inequality, and the Gaussian isoperimetric inequality) in our analysis of the duality between lossy source coding and channel coding. By Venkat Bala Chandar, Ph.D.
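The rate-distortion bound referenced above is explicit for a uniformly random bit string under Hamming distortion: R(D) = 1 − h₂(D) for 0 ≤ D ≤ 1/2, where h₂ is the binary entropy function. A minimal sketch of that bound (not the thesis's analysis):

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion(D: float) -> float:
    """R(D) = 1 - h2(D) for a uniform bit source under Hamming
    distortion; R(D) = 0 for D >= 1/2."""
    return max(0.0, 1.0 - h2(min(D, 0.5)))

# The rate needed per source bit falls as more distortion is tolerated.
for D in (0.0, 0.11, 0.25, 0.5):
    print(f"D = {D}: R(D) = {rate_distortion(D):.4f}")
```

An LDPC quantizer approaching this curve must spend just over R(D) bits per source bit to guarantee average Hamming distortion D.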

    Physical-Layer Security, Quantum Key Distribution and Post-quantum Cryptography

    The growth of data-driven technologies, 5G, and the Internet places enormous pressure on the underlying information infrastructure. Numerous proposals exist for dealing with a possible capacity crunch. However, the security of both optical and wireless networks lags behind their reliable, spectrally efficient transmission. Significant achievements have been made recently in the quantum computing arena. Because most conventional cryptographic systems rely on computational security, which guarantees security against an efficient eavesdropper only for a limited time, advances in quantum computing could compromise this security. To address these problems, various schemes providing perfect (unconditional) security have been proposed, including physical-layer security (PLS), quantum key distribution (QKD), and post-quantum cryptography. Unfortunately, it is still not clear how to integrate these different proposals with higher-level cryptographic schemes. The purpose of the Special Issue entitled "Physical-Layer Security, Quantum Key Distribution and Post-quantum Cryptography" was therefore to integrate these various approaches and enable the next generation of cryptographic systems whose security cannot be broken by quantum computers. This book is a reprint of the papers accepted for publication in the Special Issue.