
    Data compression using error correcting codes

    The application of error-correcting codes to data compression was first investigated by Shannon, who suggested that there is a duality between source coding and channel coding. This duality implies that good channel codes are likely to be good source codes (and vice versa). Recently, the problem of source coding using channel codes has been receiving increasing attention. Its main application is the transmission of data over noisy channels: since standard data compression techniques are not designed for error correction, compressing data and transmitting them over a noisy channel may corrupt the entire compressed sequence. Instead of employing standard compression techniques such as Huffman coding, one may therefore compress data using error-correcting codes that are suitable for both data compression and error correction. Recently, turbo codes, repeat-accumulate codes, low-density parity-check codes, and fountain codes have been used as lossless source codes and have achieved compression rates very close to the source entropy. When near-lossless compression is desired, i.e. a small level of distortion is acceptable, the source encoder generates fixed-length codewords and the encoding complexity is low. Theoretically, random codes can achieve near-lossless compression; in the literature, this has been proved by presenting a random binning scheme. In practice, all powerful channel codes, e.g. turbo codes, can follow the same procedure as suggested by random binning and achieve compression rates close to the entropy. On the other hand, if completely lossless compression is required, i.e. the distortion must be forced to zero, the source encoding is a complicated iterative procedure that generates variable-length codewords to guarantee zero distortion; the large complexity of this encoding imposes a large delay on the system. The iterative encoding procedure can be regarded as using a nested code, in which each codeword of a higher-rate code is formed by adding parities to a codeword of some lower-rate code. This iterative encoding has been proposed in the literature for practical codes, e.g. turbo codes and low-density parity-check (LDPC) codes. In contrast to near-lossless source coding, in the lossless case no random coding theory is available to support the achievability of entropy and specify the distribution of the compression rate. This thesis makes two main contributions. The first is a tree-structured random binning scheme that proves nested random codes asymptotically achieve the entropy. We derive the probability mass function of the compression rate and show how it varies as the block length increases. We also consider a more practical tree-structured random binning scheme in which parities are generated independently and randomly, but are biased. The second contribution is a reduction of the delay in turbo source coding. We consider turbo codes for data compression and observe that existing schemes achieve low compression rates but, because of large block lengths and large numbers of iterations, impose a large delay on the system. To decrease this delay, we study source coding using short-block-length turbo codes and show how to modify different components of the encoder to achieve low compression rates. Specifically, we modify the parity interleaver and use rectangular puncturing arrays, and we replace a single turbo code by a library of turbo codes to further decrease the compression rate.
    Since the scheme is variable-length and many codes are used, the codeword length and the code index (the index of the turbo code used for compression) are transmitted as overhead, which increases the compression rate. We propose a method for detecting this overhead from the codeword itself, so that it no longer needs to be transmitted. This detection method reduces the compression rate for short-block-length systems but becomes less attractive for large-block-length codes.
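
    As a concrete illustration of the fixed-length (near-lossless) binning idea above, here is a minimal sketch in Python. It is not the thesis's scheme: the dense random parity-check matrix `H` and all parameters are hypothetical. The compressed representation of a source block is simply its syndrome, i.e. its bin index.

```python
# A minimal sketch of fixed-length compression by random binning: the
# compressed word for a source block x is its syndrome s = H x (mod 2).
# H and all parameters below are illustrative, not from the thesis.
import numpy as np

rng = np.random.default_rng(0)

n, m = 1000, 600                      # block length, syndrome length (rate 0.6)
H = rng.integers(0, 2, size=(m, n))   # hypothetical random binning matrix

def compress(x):
    """Map a length-n binary block to its length-m bin index (syndrome)."""
    return (H @ x) % 2

# Decompression would search the bin {x : H x = s} for the most probable
# source sequence; with a dense random H that search is exponential, which
# is why practical schemes use sparse (LDPC) or turbo structures instead.
x = (rng.random(n) < 0.1).astype(int)  # Bernoulli(0.1) source, entropy ~ 0.47
s = compress(x)
print(f"compressed {n} bits to {s.size} bits (rate {s.size / n:.2f})")
```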

    Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    Given that sensors in wireless sensor networks are energy-limited and that wireless channel conditions are often poor, there is an urgent need for a low-complexity coding method that offers a high compression ratio and resistance to noise. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing approaches, from theory to practice, to distributed joint source-channel coding over independent channels, multiple-access channels, and broadcast channels are introduced in turn. We also present a practical scheme for compressing multiple correlated sources over independent channels. Simulation results demonstrate the desired efficiency.
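
    To make the distributed-coding idea concrete, here is a minimal, self-contained toy (not the paper's scheme) of asymmetric Slepian-Wolf coding with a (7,4) Hamming code: one source is delivered losslessly as side information, and the other is compressed to a 3-bit syndrome, under the illustrative assumption that the two sources differ in at most one bit per 7-bit block.

```python
# A toy of asymmetric Slepian-Wolf coding with a (7,4) Hamming code:
# Y is delivered losslessly as side information, and X is compressed to a
# 3-bit syndrome, under the illustrative correlation model that X and Y
# differ in at most one bit per 7-bit block. Not the paper's scheme.
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# expansion of i + 1, so a nonzero syndrome directly addresses a position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def sw_encode(x):
    return (H @ x) % 2                     # 7 source bits -> 3 bits

def sw_decode(s, y):
    d = (s + H @ y) % 2                    # syndrome of e = x XOR y
    x_hat = y.copy()
    if d.any():                            # weight-1 difference: flip it
        pos = 4 * d[0] + 2 * d[1] + d[2] - 1
        x_hat[pos] ^= 1
    return x_hat

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 7)
x = y.copy()
x[rng.integers(7)] ^= 1                    # correlated source: one flipped bit
assert np.array_equal(sw_decode(sw_encode(x), y), x)
print("recovered X from 3 bits plus side information Y")
```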

    Constructing Linear Encoders with Good Spectra

    Linear encoders with good joint spectra are suitable candidates for optimal lossless joint source-channel coding (JSCC), where the joint spectrum is a variant of the input-output complete weight distribution and is considered good if it is close to the average joint spectrum of all linear encoders (of the same coding rate). Although such encoders are known to exist, little is known about how to construct them in practice. This paper is devoted to their construction. In particular, two families of linear encoders are presented and proved to have good joint spectra. The first family is derived from Gabidulin codes, a class of maximum-rank-distance codes. The second family is constructed using a serial concatenation of an encoder of a low-density parity-check code (as outer encoder) with a low-density generator-matrix encoder (as inner encoder). In addition, criteria for good linear encoders are defined for three coding applications: lossless source coding, channel coding, and lossless JSCC. In the framework of the code-spectrum approach, these three scenarios correspond to the problems of constructing linear encoders with good kernel spectra, good image spectra, and good joint spectra, respectively. Good joint spectra imply both good kernel spectra and good image spectra, and for every linear encoder having a good kernel (resp., image) spectrum, it is proved that there exists a linear encoder not only with the same kernel (resp., image) but also with a good joint spectrum. Thus a good joint spectrum is the most important feature of a linear encoder. (Comment: 40 pages, 3 figures; extended version of a paper to be published in IEEE Transactions on Information Theory.)
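
    For intuition, the joint spectrum of a small linear encoder can be tabulated directly. The sketch below is a brute-force toy with a made-up (6,3) generator matrix `G`: it counts (input weight, output weight) pairs over all 2^k inputs, and the paper's joint spectrum is a complete-weight-distribution variant of exactly this kind of input-output weight count.

```python
# Brute-force tabulation of the joint (input weight, output weight) spectrum
# of a small linear encoder over GF(2). The (6,3) generator matrix G is a
# made-up example, not one of the paper's constructions.
from collections import Counter
from itertools import product

import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

k, n = G.shape
spectrum = Counter()
for bits in product((0, 1), repeat=k):
    u = np.array(bits)
    v = (u @ G) % 2                        # encode the input block
    spectrum[(int(u.sum()), int(v.sum()))] += 1

for (wi, wo), count in sorted(spectrum.items()):
    print(f"input weight {wi} -> output weight {wo}: {count}")
```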

    Design techniques for graph-based error-correcting codes and their applications

    In Shannon's seminal paper, "A Mathematical Theory of Communication", he defined the channel capacity, which predicts the ultimate performance that transmission systems can achieve, and suggested that capacity is achievable by error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between the transmitted information and the redundancy to correct or detect channel errors. The discovery of turbo codes and the rediscovery of low-density parity-check (LDPC) codes have revived research in channel coding with novel ideas and techniques for code concatenation, iterative decoding, graph-based construction, and design based on density evolution. This dissertation focuses on the design of graph-based channel codes, such as LDPC and irregular repeat-accumulate (IRA) codes, via density evolution, and uses density evolution to design IRA codes for scalable image/video communication and LDPC codes for distributed source coding, which can be considered a channel coding problem. The first part of the dissertation covers the design and analysis of rate-compatible IRA codes for scalable image transmission systems; it uses density evolution to analyse the effect of puncturing on IRA codes and presents an asymptotic analysis of system performance. In the second part, we consider designing source-optimized IRA codes. The idea is to exploit the unequal error protection (UEP) that IRA codes provide through their irregularity. In video and image transmission systems, performance is measured by the peak signal-to-noise ratio (PSNR), and we propose an approach to design IRA codes optimized for this criterion. In the third part, we investigate the Slepian-Wolf coding problem using LDPC codes. The problems addressed include coding with multiple sources and non-binary sources, and coding using multi-level and non-binary codes.
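
    Density evolution, the main design tool referred to above, is easiest to see on the binary erasure channel, where the message "density" collapses to a single erasure probability. The following sketch (illustrative parameters; the standard recursion for a regular (dv, dc) ensemble) bisects for the decoding threshold of the (3,6) ensemble, which is known to be approximately 0.4294.

```python
# Density evolution for a regular (dv, dc) LDPC ensemble on the binary
# erasure channel: track the erasure probability x of a variable-to-check
# message across iterations, and call the ensemble "good" at channel
# erasure rate eps if x is driven to zero. Parameters are illustrative.
def de_converges(eps, dv=3, dc=6, iters=2000, tol=1e-12):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True                # erasure probability driven to zero
    return False

lo, hi = 0.0, 1.0                      # bisect for the decoding threshold
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if de_converges(mid) else (lo, mid)
print(f"(3,6) BEC threshold ~ {lo:.4f}")  # ~ 0.4294
```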

    Slepian-Wolf Coding for Broadcasting with Cooperative Base-Stations

    We propose a base-station (BS) cooperation model for broadcasting a discrete memoryless source in a cellular or heterogeneous network. The model allows the receivers to use helper BSs to improve network performance, and it permits the receivers to have prior side information about the source. We establish the model's information-theoretic limits in two operational modes: in Mode 1, the helper BSs are given information about the channel codeword transmitted by the main BS, and in Mode 2 they are provided correlated side information about the source. Optimal codes for Mode 1 use hash-and-forward coding at the helper BSs, while in Mode 2 optimal codes use source codes from Wyner's helper source-coding problem at the helper BSs. We prove the optimality of both approaches by way of a new list-decoding generalisation of [8, Thm. 6], and, in doing so, show an operational duality between Modes 1 and 2. (Comment: 16 pages, 1 figure.)
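
    The flavour of the list-decoding-plus-hash idea can be conveyed with a toy sketch. All names and sizes below are illustrative, not the paper's construction: the receiver narrows the transmission to a short candidate list, and a short hash forwarded over the helper link singles out the true message.

```python
# A toy of list decoding disambiguated by a forwarded hash, in the spirit
# of hash-and-forward. Everything here is illustrative: the list is
# simulated rather than produced by an actual channel decoder.
import hashlib
import random

def short_hash(bits, nbytes=1):
    """Hypothetical short hash: leading byte(s) of SHA-256 of the bits."""
    return hashlib.sha256(bytes(bits)).digest()[:nbytes]

random.seed(0)
messages = [[random.randint(0, 1) for _ in range(32)] for _ in range(1000)]
true_msg = random.choice(messages)

# Simulated decoder output at the receiver: the true message plus a few
# candidates it could not distinguish from its own channel observation.
candidates = random.sample([m for m in messages if m != true_msg], 4)
candidates.append(true_msg)
random.shuffle(candidates)

tag = short_hash(true_msg)                       # forwarded by the helper BS
survivors = [m for m in candidates if short_hash(m) == tag]
print(f"list of {len(candidates)} -> {len(survivors)} after the hash check")
```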

    Multi-Way Relay Networks: Orthogonal Uplink, Source-Channel Separation and Code Design

    We consider a multi-way relay network with an orthogonal uplink and correlated sources, and we characterise reliable communication (in the usual Shannon sense) with a single-letter expression. The characterisation is obtained using a joint source-channel random-coding argument, which is based on a combination of Wyner et al.'s "Cascaded Slepian-Wolf Source Coding" and Tuncel's "Slepian-Wolf Coding over Broadcast Channels". We prove a separation theorem for the special case of two nodes; that is, we show that a modular code architecture with separate source and channel coding functions is (asymptotically) optimal. Finally, we propose a practical coding scheme based on low-density parity-check codes, and we analyse its performance using multi-edge density evolution. (Comment: Authors' final version, accepted and to appear in IEEE Transactions on Communications.)
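
    The Slepian-Wolf conditions that underpin separation results of this kind are easy to evaluate numerically. The sketch below, using a made-up joint pmf for two correlated binary sources, checks whether a rate pair lies in the region R1 >= H(X|Y), R2 >= H(Y|X), R1 + R2 >= H(X,Y).

```python
# Numerical check of the Slepian-Wolf rate region for two correlated
# binary sources. The joint pmf below is made up for illustration.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]                      # 0 log 0 = 0 convention
    return float(-(p * np.log2(p)).sum())

p_xy = np.array([[0.40, 0.10],        # joint pmf; rows index X, columns Y
                 [0.05, 0.45]])

h_xy = entropy(p_xy)
h_x_given_y = h_xy - entropy(p_xy.sum(axis=0))   # H(X|Y) = H(X,Y) - H(Y)
h_y_given_x = h_xy - entropy(p_xy.sum(axis=1))   # H(Y|X) = H(X,Y) - H(X)

def in_slepian_wolf_region(r1, r2):
    return (r1 >= h_x_given_y and r2 >= h_y_given_x and r1 + r2 >= h_xy)

print(in_slepian_wolf_region(0.9, 0.9))          # True: inside the region
print(in_slepian_wolf_region(0.5, 0.5))          # False: rates too small
```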