
    Lecture Notes on Network Information Theory

    These lecture notes have been converted to a book titled Network Information Theory, recently published by Cambridge University Press. The book provides a significantly expanded exposition of the material in the lecture notes, as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and now obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/.

    Iterative source and channel decoding relying on correlation modelling for wireless video transmission

    Since joint source-channel decoding (JSCD) is capable of exploiting the residual redundancy in the source signals to improve the attainable error resilience, it has attracted substantial attention. Motivated by the principle of exploiting the source redundancy at the receiver, in this treatise we study iterative source-channel decoding (ISCD) aided video communications, where the video signal is modelled by a first-order Markov process. Firstly, we derive reduced-complexity formulas for first-order Markov modelling (FOMM) aided source decoding. Then we propose a bit-based iterative horizontal-vertical scanline model (IHVSM) aided source decoding algorithm, in which a horizontal and a vertical source decoder exchange their extrinsic information following the iterative decoding philosophy. The iterative IHVSM aided decoder is then employed in a forward error correction (FEC) encoded uncompressed video transmission scenario, where the IHVSM and the FEC decoder exchange softbit information to perform turbo-like ISCD for the sake of improving the reconstructed video quality. Finally, we benchmark the attainable system performance against a near-lossless H.264/AVC video communication system and against the existing FOMM-based softbit source decoding scheme, in which the softbit decoding is performed by a one-dimensional Markov model aided decoder. Our simulation results show that Eb/N0 improvements in excess of 2.8 dB are attainable by the proposed technique in uncompressed video applications. The financial support of the RC-UK under the auspices of the India-UK Advanced Technology Centre (IU-ATC), of the EU under the CONCERTO project, and of the European Research Council's Advanced Fellow Grant is gratefully acknowledged.
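
    As an illustration of the turbo-like extrinsic-information exchange described above, the following is a minimal sketch in Python. The functions horizontal_siso and vertical_siso are hypothetical stand-ins for the IHVSM component decoders (soft-in/soft-out decoders returning a-posteriori log-likelihood ratios); only the generic exchange schedule is shown, not the paper's exact algorithm.

        import numpy as np

        def iscd_iterations(channel_llr, horizontal_siso, vertical_siso, n_iter=8):
            # Turbo principle: each component decoder passes on only the
            # *extrinsic* part of its output, i.e. the newly gained information.
            extrinsic_v = np.zeros_like(channel_llr)
            apost_v = channel_llr
            for _ in range(n_iter):
                # Horizontal decoder: a-priori input = channel LLRs plus the
                # vertical decoder's extrinsic output.
                apost_h = horizontal_siso(channel_llr + extrinsic_v)
                extrinsic_h = apost_h - channel_llr - extrinsic_v
                # Vertical decoder consumes the horizontal extrinsic output.
                apost_v = vertical_siso(channel_llr + extrinsic_h)
                extrinsic_v = apost_v - channel_llr - extrinsic_h
            return apost_v  # final a-posteriori LLRs; hard decisions are their signs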

    Piecewise mapping in HEVC lossless intra-prediction coding

    The lossless intra-prediction coding modality of the High Efficiency Video Coding (HEVC) standard provides high coding performance while allowing frame-by-frame access to the coded data. This is of interest in many professional applications such as medical imaging, automotive vision, and digital preservation in libraries and archives. Various improvements to lossless intra-prediction coding have been proposed recently, most of them based on sample-wise prediction using Differential Pulse Code Modulation (DPCM). Other recent proposals aim at further reducing the energy of intra-predicted residual blocks. However, the energy reduction achieved is frequently minimal due to the difficulty of correctly predicting the sign and magnitude of residual values. In this paper, we pursue a novel approach to this energy-reduction problem using piecewise mapping (pwm) functions. Specifically, we analyze the range of values in residual blocks and accordingly apply a pwm function that maps specific residual values to unique lower values. The encoder signals appropriate parameters associated with the pwm functions, so that the corresponding inverse pwm functions at the decoder can map the values back to the original residual values, which are then used to reconstruct the original signal. The mapping is therefore reversible and introduces no losses. We evaluate the pwm functions on 4×4 residual blocks computed after DPCM-based prediction for lossless coding of a variety of camera-captured and screen-content sequences. Evaluation results show that the pwm functions attain maximum bit-rate reductions of 5.54% and 28.33% for screen-content material compared to DPCM-based and block-wise intra-prediction, respectively. Compared to Intra Block Copy, piecewise mapping attains maximum bit-rate reductions of 11.48% for camera-captured material.
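
    To make the reversibility argument concrete, here is a minimal Python sketch of a value-to-index mapping in the spirit of the pwm idea above. It is not the paper's pwm function: the ordering by magnitude and the per-block value table acting as the signalled parameters are illustrative assumptions.

        def forward_pwm(residual_block):
            # Map each distinct residual value to its rank in a magnitude-sorted
            # table, so residuals are replaced by small non-negative indices.
            values = sorted(set(residual_block), key=lambda v: (abs(v), v))
            table = {v: i for i, v in enumerate(values)}
            return [table[v] for v in residual_block], values  # `values` = side information

        def inverse_pwm(mapped_block, values):
            # The decoder inverts the mapping from the signalled table.
            return [values[i] for i in mapped_block]

        block = [0, -1, 3, 0, 7, -1, 0, 3, 0, 0, -1, 7, 3, 0, 0, -1]  # 4x4 DPCM residuals
        mapped, params = forward_pwm(block)
        assert inverse_pwm(mapped, params) == block  # reversible: no losses introduced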

    Fast Random Access to Wavelet Compressed Volumetric Data Using Hashing

    We present a new approach to lossy storage of the coefficients of wavelet-transformed data. While it is common to store the coefficients of largest magnitude (and let all other coefficients be zero), we allow a slightly different set of coefficients to be stored. This brings into play a recently proposed hashing technique that allows space-efficient storage and very efficient retrieval of coefficients. Our approach is applied to the compression of volumetric data sets. For the "Visible Man" volume we obtain up to 80% improvement in compression ratio over previously suggested schemes. Further, the time for accessing a random voxel is quite competitive.
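
    The storage idea can be sketched as follows in Python, with an ordinary dict standing in for the space-efficient hash table of the paper. The sketch keeps exactly the k largest-magnitude coefficients, whereas the scheme above deliberately permits a slightly different coefficient set; absent coefficients decode as zero.

        import heapq

        def build_table(indexed_coefficients, k):
            # indexed_coefficients: iterable of (index, value) pairs from the
            # wavelet transform; keep the k entries of largest magnitude.
            kept = heapq.nlargest(k, indexed_coefficients, key=lambda iv: abs(iv[1]))
            return dict(kept)

        def lookup(table, index):
            # Expected O(1) random access; a missing key means the coefficient
            # was discarded and is treated as zero.
            return table.get(index, 0.0)

        coeffs = [(0, 4.2), (1, -0.1), (2, 0.05), (3, -3.7)]
        table = build_table(coeffs, k=2)
        print(lookup(table, 3), lookup(table, 2))  # -3.7 0.0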

    High-throughput variable-to-fixed entropy codec using selective, stochastic code forests

    Efficient high-throughput (HT) compression algorithms are paramount to meeting the stringent constraints of present and upcoming data storage, processing, and transmission systems; in particular, latency, bandwidth, and energy requirements are critical for those systems. Most HT codecs are designed to maximize compression speed and, secondarily, to minimize compressed lengths. On the other hand, decompression speed is often equally or more critical than compression speed, especially in scenarios where decompression is performed multiple times and/or at critical parts of a system. In this work, an algorithm for designing variable-to-fixed (VF) codes that prioritizes decompression speed is proposed. Stationary Markov analysis is employed to generate multiple, jointly optimized codes (denoted code forests). Their average compression efficiency is on par with the state of the art in VF codes, e.g., within 1% of Yamamoto et al.'s algorithm. The proposed code-forest structure enables the implementation of highly efficient codecs, with decompression speeds 3.8 times faster than other state-of-the-art HT entropy codecs with equal or better compression ratios for natural data sources. Compared to these HT codecs, the proposed forests yield similar compression efficiency and speeds.
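
    The variable-to-fixed principle itself can be illustrated with a single Tunstall code for a memoryless binary source, sketched below in Python. The paper's contribution, selectively applied forests of jointly optimized codes driven by stationary Markov analysis, is not reproduced here; this only shows how variable-length source words are parsed into fixed-length codewords.

        import heapq

        def tunstall_leaves(p, num_codewords):
            # Grow the parse tree by repeatedly splitting the most probable
            # leaf; the heap is keyed on negative leaf probability.
            heap = [(-1.0, "")]                  # root: empty source word, probability 1
            while len(heap) < num_codewords:     # each split adds one leaf
                prob, word = heapq.heappop(heap)
                heapq.heappush(heap, (prob * p, word + "0"))
                heapq.heappush(heap, (prob * (1 - p), word + "1"))
            return sorted(word for _, word in heap)

        # Eight codewords -> every parsed source word is coded in 3 fixed bits.
        leaves = tunstall_leaves(p=0.9, num_codewords=8)
        codebook = {word: index for index, word in enumerate(leaves)}
        # Greedily matching the input against `leaves` (a complete prefix-free
        # set) emits one fixed-length index per parsed source word.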