
    Source and channel coding using Fountain codes

    The invention of Fountain codes is a major advance in the field of error-correcting codes. The goal of this work is to study and develop algorithms for source and channel coding using a family of Fountain codes known as Raptor codes. From an asymptotic point of view, the best currently known sum-product decoding algorithm for non-binary alphabets has a high complexity that limits its use in practice. For binary channels, sum-product decoding algorithms have been extensively studied and are known to perform well. In the first part of this work, we develop a decoding algorithm for binary codes on non-binary channels based on a combination of sum-product and maximum-likelihood decoding. We apply this algorithm to Raptor codes on both symmetric and non-symmetric channels. For symmetric channels, our algorithm achieves the best performance in terms of complexity and per-symbol error rate for finite-length blocks. In the second part, we examine the performance of Raptor codes under sum-product decoding when transmission takes place over piecewise stationary memoryless channels and over noisy channels with memory. We develop algorithms for joint estimation and detection that simultaneously employ expectation maximization to estimate the noise and the sum-product algorithm to correct errors. We also develop a hard-decision algorithm for Raptor codes on piecewise stationary memoryless channels. Finally, we generalize our joint LT estimation-decoding algorithms to Markov-modulated channels. In the third part of this work, we develop compression algorithms using Raptor codes. More specifically, we introduce a lossless text compression algorithm that obtains competitive results compared to existing classical approaches. Moreover, we propose distributed source coding algorithms based on the paradigm proposed by Slepian and Wolf.
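
    As a concrete illustration of the sum-product machinery that Raptor codes build on, the sketch below implements LT encoding and the peeling decoder over a binary erasure channel, where sum-product decoding reduces to repeatedly resolving degree-one check equations. The degree distribution, function names, and parameters are illustrative assumptions, not the algorithms developed in the thesis.

        import random

        def lt_encode(source_bits, n_out, degrees, probs, rng):
            """Each LT output symbol is the XOR of a randomly chosen subset of source bits."""
            k = len(source_bits)
            symbols = []
            for _ in range(n_out):
                d = rng.choices(degrees, weights=probs)[0]
                neighbors = set(rng.sample(range(k), d))
                value = 0
                for i in neighbors:
                    value ^= source_bits[i]
                symbols.append((neighbors, value))
            return symbols

        def peel_decode(symbols, k):
            """Peeling decoder: the erasure-channel specialization of sum-product decoding."""
            recovered = {}
            eqs = [(set(nbrs), val) for nbrs, val in symbols]
            progress = True
            while progress and len(recovered) < k:
                progress = False
                # resolve every degree-one equation
                for nbrs, val in eqs:
                    if len(nbrs) == 1:
                        i = next(iter(nbrs))
                        if i not in recovered:
                            recovered[i] = val
                            progress = True
                # subtract recovered bits from the remaining equations
                new_eqs = []
                for nbrs, val in eqs:
                    for i in list(nbrs):
                        if i in recovered:
                            nbrs.discard(i)
                            val ^= recovered[i]
                    if nbrs:
                        new_eqs.append((nbrs, val))
                eqs = new_eqs
            return recovered if len(recovered) == k else None

        rng = random.Random(0)
        data = [rng.randint(0, 1) for _ in range(32)]
        received = lt_encode(data, 64, degrees=[1, 2, 3, 4], probs=[0.1, 0.5, 0.2, 0.2], rng=rng)
        decoded = peel_decode(received, 32)
        print("recovered all 32 bits" if decoded is not None else "decoding stalled")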

    Lossless data compression with polar codes

    Ankara: The Department of Electrical and Electronics Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references, leaves 60-62.
    In this study, lossless polar compression schemes are proposed for finite source alphabets in the noiseless setting. In the first part, the lossless polar source coding scheme for binary memoryless sources introduced by Arıkan is extended to general prime-size alphabets. In addition to conventional successive cancellation decoding (SC-D), successive cancellation list decoding (SCL-D) is utilized for improved performance at practical block lengths. For code construction, the greedy approximation method for density evolution proposed by Tal and Vardy is adapted to non-binary alphabets. In the second part, a variable-length, zero-error polar compression scheme for prime-size alphabets based on the work of Cronie and Korada is developed. It is shown numerically that this scheme provides rates close to the minimum source coding rate at practical block lengths under SC-D, while achieving the minimum source coding rate asymptotically in the block length. For improved performance at practical block lengths, a scheme based on SCL-D is developed. The proposed schemes are generalized to arbitrary finite source alphabets using a multi-level approach. For practical applications, the robustness of the zero-error source coding scheme with respect to uncertainty in the source distribution is investigated. Based on this investigation, it is shown that a class of prebuilt information sets can be used at practical block lengths instead of constructing a specific information set for every source distribution. Since the compression schemes proposed in this thesis are not universal, the probability distribution of the source must be known at the receiver for reconstruction. In the presence of source uncertainty, this requires the transmitter to inform the receiver about the source distribution. As a solution to this problem, a sequential quantization with scaling algorithm is proposed to transmit the probability distribution of the source together with the compressed word in an efficient way.
    Çaycı, Semih. M.S.
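
    To make the polarization step concrete, the sketch below applies Arıkan's binary transform G_N = F^(tensor n), with F = [[1, 0], [1, 1]], to a block of source bits; in lossless polar source coding the encoder keeps only the transformed coordinates whose conditional entropy remains high (the information set) and a successive cancellation decoder reconstructs the rest. This is a minimal binary illustration under assumed conventions, not the prime-size-alphabet schemes of the thesis.

        def polar_transform(u):
            """Apply the binary Arikan transform G_N = F^(tensor n) to a GF(2) vector.

            len(u) must be a power of two; F = [[1, 0], [1, 1]].
            """
            n = len(u)
            if n == 1:
                return list(u)
            half = n // 2
            # combine the two halves, then transform each half recursively
            upper = [u[i] ^ u[i + half] for i in range(half)]
            lower = list(u[half:])
            return polar_transform(upper) + polar_transform(lower)

        # For a heavily biased source, the transformed block concentrates its
        # randomness in a few coordinates, which are the ones the compressor stores.
        block = [1, 0, 0, 0, 0, 0, 0, 1]
        print(polar_transform(block))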

    Network compression via network memory: fundamental performance limits

    The amount of information that is churned out daily around the world is staggering, and hence, future technological advancements are contingent upon the development of scalable acquisition, inference, and communication mechanisms for this massive data. This Ph.D. dissertation draws upon mathematical tools from information theory and statistics to understand the fundamental performance limits of universal compression of this massive data at the packet level, just above layer 3 of the network, when the intermediate network nodes are enabled with the capability of memorizing the previous traffic. Universality of compression imposes an inevitable redundancy (overhead) on the compression performance of universal codes, which is due to the learning of the unknown source statistics. In this work, the previous asymptotic results on the redundancy of universal compression are generalized to the finite-length regime, which is applicable to small network packets. Further, network compression via memory is proposed as a solution for the compression of relatively small network packets whenever the network nodes (i.e., the encoder and the decoder) are equipped with memory and have access to massive amounts of previous communication. In a nutshell, network compression via memory learns the patterns and statistics of the packet payloads and uses them for compression and reduction of the traffic. At the cost of increased computational overhead in the network nodes, network compression via memory significantly reduces the transmission cost in the network. This leads to a large performance improvement, since the cost of transmitting one bit is far greater than the cost of processing it.
    Ph.D.
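
    The memory-assisted idea can be illustrated with an off-the-shelf compressor: priming a deflate encoder and decoder with previously observed traffic (a shared dictionary held by both nodes) typically shrinks a small packet far more than compressing it in isolation. The sketch below uses Python's zlib preset-dictionary feature purely as an illustration of the concept; the traffic strings are hypothetical, and the dissertation's analysis is information-theoretic rather than tied to deflate.

        import zlib

        def compress_packet(payload: bytes, memory: bytes = b"") -> bytes:
            """Deflate one small packet, optionally primed with previously seen traffic."""
            comp = zlib.compressobj(level=9, zdict=memory) if memory else zlib.compressobj(level=9)
            return comp.compress(payload) + comp.flush()

        def decompress_packet(blob: bytes, memory: bytes = b"") -> bytes:
            """Inverse of compress_packet; both ends must hold the same memory."""
            dec = zlib.decompressobj(zdict=memory) if memory else zlib.decompressobj()
            return dec.decompress(blob) + dec.flush()

        # Hypothetical previous traffic that resembles the new packet.
        past_traffic = b"GET /api/v1/items?id=41 HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
        packet = b"GET /api/v1/items?id=42 HTTP/1.1\r\nHost: example.com\r\n\r\n"

        plain = compress_packet(packet)
        primed = compress_packet(packet, memory=past_traffic)
        assert decompress_packet(primed, memory=past_traffic) == packet
        print(len(packet), len(plain), len(primed))  # the memory-primed encoding is the shortest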