
    Improved Nearly-MDS Expander Codes

    A construction of expander codes is presented with the following three properties: (i) the codes lie close to the Singleton bound, (ii) they can be encoded in time complexity that is linear in their code length, and (iii) they have a linear-time bounded-distance decoder. By using a version of the decoder that also corrects erasures, the codes can replace MDS outer codes in concatenated constructions, thus resulting in linear-time encodable and decodable codes that approach the Zyablov bound or the capacity of memoryless channels. The presented construction improves on an earlier result by Guruswami and Indyk in that any rate and relative minimum distance lying below the Singleton bound is attainable for a significantly smaller alphabet size. Comment: Part of this work was presented at the 2004 IEEE Int'l Symposium on Information Theory (ISIT'2004), Chicago, Illinois (June 2004). This work was submitted to IEEE Transactions on Information Theory on January 21, 2005. To appear in IEEE Transactions on Information Theory, August 2006. 12 pages.
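
    The two bounds named in this abstract can be compared numerically. The sketch below (illustrative only, not from the paper; all function names are mine) contrasts the Singleton bound R <= 1 - delta with the Zyablov bound for binary concatenated codes, R_Z(delta) = max over delta < mu <= 1/2 of (1 - H(mu)) * (1 - delta/mu):

```python
import math

def h2(p):
    """Binary entropy function H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def singleton_rate(delta):
    """Best possible rate at relative distance delta (Singleton: R <= 1 - delta)."""
    return 1.0 - delta

def zyablov_rate(delta, steps=10000):
    """Zyablov bound for binary concatenated codes, found by a grid
    search over the inner-code relative distance mu in (delta, 1/2]."""
    best = 0.0
    for i in range(1, steps + 1):
        mu = delta + (0.5 - delta) * i / steps
        best = max(best, (1 - h2(mu)) * (1 - delta / mu))
    return best
```

    For any relative distance delta in (0, 1/2), the Zyablov rate is strictly below the Singleton rate, which is the gap the concatenated constructions in this line of work aim to close.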

    Using LDGM Codes and Sparse Syndromes to Achieve Digital Signatures

    In this paper, we address the problem of achieving efficient code-based digital signatures with small public keys. The solution we propose exploits sparse syndromes and randomly designed low-density generator matrix codes. Based on our evaluations, the proposed scheme outperforms existing solutions, making it possible to achieve considerable security levels with very small public keys. Comment: 16 pages. The final publication is available at springerlink.co
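
    A minimal sketch of the syndrome-based verification idea behind code-based signatures like the one this abstract describes. This is not the paper's scheme: in the real construction the syndrome is derived from a hash of the message and the matrix is an LDGM-derived parity check; here H, e, and the weight bound are toy stand-ins:

```python
def syndrome(H, e):
    """Compute s = H * e^T over GF(2); H is a list of rows of 0/1 entries."""
    return [sum(h_ij & e_j for h_ij, e_j in zip(row, e)) % 2 for row in H]

def verify(H, e, s, max_weight):
    """Accept the signature e only if it is sparse (Hamming weight at most
    max_weight) and reproduces the target syndrome s."""
    return sum(e) <= max_weight and syndrome(H, e) == s
```

    The signer's secret trapdoor is what allows finding a low-weight e for a given s; without it, recovering such an e is the hard syndrome-decoding problem the security rests on.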

    Channel coding for network communication: an information theoretic perspective

    Fall 2011. Includes bibliographical references.

    Channel coding helps a communication system to combat noise and interference by adding "redundancy" to the source message. Theoretical fundamentals of channel coding in point-to-point systems have been intensively studied in the research area of information theory, founded by Claude Shannon in his celebrated 1948 work. A set of landmark results have been developed to characterize the performance limitations in terms of the rate and reliability tradeoff bounds. However, unlike its success in point-to-point systems, information theory has not yielded as rich results in network communication, which has been a key research focus over the past two decades. Due to the limitations posed by some of the key assumptions in classical information theory, network information theory is far from being mature and complete. For example, the classical information theoretic model assumes that communication parameters such as the information rate should be jointly determined by all transmitters and receivers. Communication should be carried out continuously over a long time such that the overhead of communication coordination becomes negligible. The communication channel should be stationary in order for the coding scheme to transform the channel noise randomness into deterministic statistics. These assumptions are valid in a point-to-point system, but they do not permit extensive application of channel coding in network systems because they essentially ignore the dynamic nature of network communication. Network systems deal with bursty message transmissions between highly dynamic users. For various reasons, joint determination of key communication parameters before message transmission is often infeasible or expensive. Communication channels can often be non-stationary due to the dynamic communication interference generated by the network users.
The objective of this work is to extend information theory toward network communication scenarios. We develop new channel coding results, in terms of the communication rate and error performance tradeoff, for several non-classical communication models in which key assumptions made in classical channel coding are dropped or revised.
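
    The classical rate limits this abstract builds on can be illustrated with the binary symmetric channel, whose Shannon capacity is C = 1 - H(p) bits per channel use. This is the standard textbook formula, not code from the thesis:

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Shannon capacity of a binary symmetric channel with crossover
    probability p: the supremum of reliably achievable coding rates."""
    return 1.0 - binary_entropy(p)
```

    At p = 0 the channel is noiseless (capacity 1), and at p = 1/2 the output is independent of the input (capacity 0); a stationary channel assumption is exactly what such single-letter formulas rely on, and what the network models in this thesis relax.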

    Coding for Parallel Channels: Gallager Bounds for Binary Linear Codes with Applications to Repeat-Accumulate Codes and Variations

    This paper is focused on the performance analysis of binary linear block codes (or ensembles) whose transmission takes place over independent and memoryless parallel channels. New upper bounds on the maximum-likelihood (ML) decoding error probability are derived. These bounds are applied to various ensembles of turbo-like codes, focusing especially on repeat-accumulate codes and their recent variations, which possess low encoding and decoding complexity and exhibit remarkable performance under iterative decoding. The framework of the second version of the Duman and Salehi (DS2) bounds is generalized to the case of parallel channels, along with the derivation of their optimized tilting measures. The connection between the generalized DS2 and the 1961 Gallager bounds, addressed by Divsalar and by Sason and Shamai for a single channel, is explored in the case of an arbitrary number of independent parallel channels. The generalization of the DS2 bound to parallel channels makes it possible to re-derive specific bounds which were originally derived by Liu et al. as special cases of the Gallager bound. In the asymptotic case where the block length tends to infinity, the new bounds are used to obtain improved inner bounds on the attainable channel regions under ML decoding. The tightness of the new bounds for independent parallel channels is exemplified for structured ensembles of turbo-like codes. The improved bounds with their optimized tilting measures show, irrespective of the block length, an improvement over the union bound and other previously reported bounds for independent parallel channels; this improvement is especially pronounced for moderate to large block lengths. Comment: Submitted to IEEE Trans. on Information Theory, June 2006 (57 pages, 9 figures)
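
    The union bound that these improved bounds are measured against can be sketched for the single-channel case of BPSK over AWGN: P_e <= sum over d of A_d * Q(sqrt(2 d R Eb/N0)), where A_d is the code's distance spectrum. This is a standard illustration, not the paper's parallel-channel bound, and the Hamming(7,4) enumerator in the test is a textbook example:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(weight_enum, rate, ebno_db):
    """Union bound on ML word-error probability for BPSK over AWGN:
    P_e <= sum_d A_d * Q(sqrt(2 * d * rate * Eb/N0)).
    weight_enum maps Hamming distance d -> multiplicity A_d."""
    ebno = 10 ** (ebno_db / 10)
    return sum(a_d * q_func(math.sqrt(2 * d * rate * ebno))
               for d, a_d in weight_enum.items())
```

    At low SNR the sum can exceed 1 and becomes useless, which is precisely the regime where tighter Gallager/DS2-type bounds with optimized tilting measures pay off.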