
    Iterative source and channel decoding relying on correlation modelling for wireless video transmission

    Since joint source-channel decoding (JSCD) is capable of exploiting the residual redundancy in the source signals to improve the attainable error resilience, it has attracted substantial attention. Motivated by the principle of exploiting the source redundancy at the receiver, in this treatise we study iterative source-channel decoding (ISCD) aided video communications, where the video signal is modelled by a first-order Markov process. Firstly, we derive reduced-complexity formulas for first-order Markov modelling (FOMM) aided source decoding. Then we propose a bit-based iterative horizontal-vertical scanline model (IHVSM) aided source decoding algorithm, in which a horizontal and a vertical source decoder exchange their extrinsic information following the iterative decoding philosophy. The iterative IHVSM-aided decoder is then employed in a forward error correction (FEC) encoded uncompressed video transmission scenario, where the IHVSM and the FEC decoder exchange softbit information to perform turbo-like ISCD for the sake of improving the reconstructed video quality. Finally, we benchmark the attainable system performance against a near-lossless H.264/AVC video communication system and against the existing FOMM-based softbit source decoding scheme, where the softbit decoding is performed by a one-dimensional Markov model aided decoder. Our simulation results show that Eb/N0 improvements in excess of 2.8 dB are attainable by the proposed technique in uncompressed video applications. The financial support of the RC-UK under the auspices of the India-UK Advanced Technology Centre (IU-ATC), of the EU under the CONCERTO project, and of the European Research Council's Advanced Fellow Grant is gratefully acknowledged.
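    The softbit source decoding that such schemes build on can be illustrated in isolation. The sketch below is a minimal forward-backward posterior computation for a binary first-order Markov source observed through a memoryless channel delivering log-likelihood ratios; it is a generic illustration of FOMM-aided softbit decoding under assumed parameters, not the paper's IHVSM algorithm, and the function name and the p_stay correlation parameter are inventions for this sketch.

        import numpy as np

        def softbit_decode(llr_ch, p_stay=0.9):
            """Posterior LLRs for a binary first-order Markov source observed
            through a memoryless channel, via the forward-backward algorithm.
            llr_ch : per-bit channel LLRs, log P(b=0|y) - log P(b=1|y)
            p_stay : assumed P(b_i = b_{i-1}); > 0.5 models a correlated source
            """
            n = len(llr_ch)
            like = np.empty((n, 2))                      # per-bit channel evidence
            like[:, 0] = 1.0 / (1.0 + np.exp(-np.asarray(llr_ch, float)))
            like[:, 1] = 1.0 - like[:, 0]
            T = np.array([[p_stay, 1.0 - p_stay],
                          [1.0 - p_stay, p_stay]])       # Markov transition matrix
            alpha = np.empty((n, 2))                     # forward pass
            alpha[0] = 0.5 * like[0]
            alpha[0] /= alpha[0].sum()
            for i in range(1, n):
                alpha[i] = (alpha[i - 1] @ T) * like[i]
                alpha[i] /= alpha[i].sum()               # normalise for stability
            beta = np.ones((n, 2))                       # backward pass
            for i in range(n - 2, -1, -1):
                beta[i] = T @ (like[i + 1] * beta[i + 1])
                beta[i] /= beta[i].sum()
            post = alpha * beta
            post /= post.sum(axis=1, keepdims=True)      # per-bit posteriors
            return np.log(post[:, 0] + 1e-12) - np.log(post[:, 1] + 1e-12)

    As a sanity check, with p_stay = 0.5 the source model is memoryless and the returned LLRs equal the channel LLRs; the extra reliability gained for p_stay > 0.5 is the residual source redundancy that an ISCD receiver exchanges with the FEC decoder.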

    Flexible distribution of complexity by hybrid predictive-distributed video coding

    There is currently limited flexibility for distributing complexity in a video coding system. While rate-distortion-complexity (RDC) optimization techniques have been proposed for conventional predictive video coding with encoder-side motion estimation, they fail to offer truly flexible distribution of complexity between encoder and decoder, since the encoder is assumed to always have more computational resources available than the decoder. On the other hand, distributed video coding solutions with decoder-side motion estimation have been proposed, but hardly any RDC-optimized systems have been developed. To offer more flexibility for video applications involving multi-tasking or battery-constrained devices, in this paper we propose a codec combining predictive video coding concepts with techniques from distributed video coding, and we show the flexibility of this method in distributing complexity. We propose several modes for coding frames and provide a complexity analysis illustrating the encoder and decoder computational complexity of each mode. Rate-distortion results for each mode indicate that the coding efficiency is similar. We describe a method for choosing which mode to use for coding each inter frame, taking into account encoder and decoder complexity constraints, and illustrate how complexity is distributed more flexibly.
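    As a rough illustration of this kind of per-frame mode decision, the toy picker below tries the modes in a fixed order and returns the first whose estimated costs still fit both remaining complexity budgets. The mode names, cost figures, and try-order are invented placeholders, not the paper's measured complexities or its actual decision rule.

        # Hypothetical per-frame complexity costs (arbitrary units); mode names
        # and numbers are illustrative placeholders, not the paper's measurements.
        MODES = {
            "intra":       {"enc": 1.0, "dec": 1.0},  # no motion estimation
            "predictive":  {"enc": 8.0, "dec": 1.0},  # encoder-side motion estimation
            "distributed": {"enc": 1.5, "dec": 7.0},  # decoder-side motion estimation
        }
        PREFERENCE = ["predictive", "distributed", "intra"]  # assumed try-order

        def pick_mode(enc_budget, dec_budget, enc_used, dec_used):
            """Return the first mode in PREFERENCE whose estimated costs still
            fit within both the encoder and the decoder complexity budgets."""
            for name in PREFERENCE:
                cost = MODES[name]
                if (enc_used + cost["enc"] <= enc_budget
                        and dec_used + cost["dec"] <= dec_budget):
                    return name
            return "intra"  # nothing fits: fall back to the cheapest mode

        # A battery-constrained encoder streaming to a powerful decoder:
        # encoder headroom (2.0) rules out "predictive", so motion estimation
        # is shifted to the decoder side.
        assert pick_mode(enc_budget=20.0, dec_budget=100.0,
                         enc_used=18.0, dec_used=10.0) == "distributed"

    Because the paper reports similar coding efficiency across the modes, the decision can be driven almost entirely by where computational headroom remains, which is what this sketch captures.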

    Improved compression performance for distributed video coding


    Practical Full Resolution Learned Lossless Image Compression

    We propose the first practical learned lossless image compression system, L3C, and show that it outperforms the popular engineered codecs PNG, WebP, and JPEG 2000. At the core of our method is a fully parallelizable hierarchical probabilistic model for adaptive entropy coding, which is optimized end-to-end for the compression task. In contrast to recent autoregressive discrete probabilistic models such as PixelCNN, our method (i) models the image distribution jointly with learned auxiliary representations instead of exclusively modeling the image distribution in RGB space, and (ii) only requires three forward passes to predict all pixel probabilities instead of one per pixel. As a result, L3C obtains speedups of over two orders of magnitude when sampling compared to the fastest PixelCNN variant (Multiscale-PixelCNN). Furthermore, we find that learning the auxiliary representation is crucial and significantly outperforms predefined auxiliary representations such as an RGB pyramid.
    Comment: Updated preprocessing and Table 1; see A.1 in the supplementary. Code and models: https://github.com/fab-jul/L3C-PyTorc
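    The quantity such an entropy model is optimized for is the ideal code length of the encoded symbols under the model's predicted distributions. The sketch below computes that quantity (the ideal arithmetic-coding cost in bits); the function name and array shapes are assumptions for this illustration, and L3C's actual predictors are hierarchical networks rather than the toy inputs used here.

        import numpy as np

        def ideal_code_length_bits(probs, symbols):
            """Ideal (arithmetic-coding) cost in bits of encoding `symbols`
            when the entropy model assigns the distributions in `probs`.
            probs   : (N, K) array, each row a predicted distribution over K symbols
            symbols : (N,) integer array of the symbols actually encoded
            """
            p = probs[np.arange(len(symbols)), symbols]  # probability of each true symbol
            return float(-np.log2(np.clip(p, 1e-12, None)).sum())

        # Toy check: a uniform model over 256 values costs exactly 8 bits/symbol.
        probs = np.full((4, 256), 1.0 / 256)
        symbols = np.array([0, 17, 200, 255])
        assert abs(ideal_code_length_bits(probs, symbols) - 32.0) < 1e-6

    Minimizing this sum over training data is the familiar cross-entropy objective; at test time the same predicted distributions, produced hierarchically in a few forward passes, drive an adaptive arithmetic coder.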
