    Multiple description video coding for stereoscopic 3D

    In this paper, we propose an MDC scheme for stereoscopic 3D video. In the literature, MDC has previously been applied to 2D video but rarely to 3D video. The proposed algorithm enhances the error resilience of 3D video by combining even- and odd-frame-based MDC while retaining good temporal prediction efficiency for video over error-prone networks. Improvements are made to the original even and odd frame MDC scheme by adding a controllable amount of side information to improve frame interpolation at the decoder. The side information is also sent according to the motion in the video sequence for further improvement. The performance of the proposed algorithms is evaluated in error-free and error-prone environments, especially for wireless channels. Simulation results show improved performance using the proposed MDC at high error rates compared to single description coding (SDC) and the original even and odd frame MDC.
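
    As a rough illustration of the even/odd splitting idea, the sketch below (Python, with hypothetical helper names; it is not the paper's codec and omits the motion-adaptive side information) separates a sequence into two descriptions and interpolates the frames of a lost description from their temporal neighbours.

```python
# Minimal sketch of even/odd frame MDC (hypothetical helpers, not the
# paper's codec; the motion-adaptive side information is omitted).
import numpy as np

def split_descriptions(frames):
    """Split a frame list into even- and odd-indexed descriptions."""
    return frames[0::2], frames[1::2]

def reconstruct_from_one(received, total_len, offset):
    """Rebuild the full sequence from a single description by averaging
    the temporal neighbours of every missing frame."""
    out = [None] * total_len
    for i, f in enumerate(received):
        out[2 * i + offset] = f.astype(np.float64)
    for t in range(total_len):
        if out[t] is None:
            nb = [out[s] for s in (t - 1, t + 1)
                  if 0 <= s < total_len and out[s] is not None]
            out[t] = sum(nb) / len(nb)
    return out

frames = [np.full((4, 4), v, dtype=np.uint8) for v in range(8)]
even, odd = split_descriptions(frames)                      # two descriptions
approx = reconstruct_from_one(even, len(frames), offset=0)  # odd channel lost
```

    If both descriptions arrive, the decoder simply interleaves them losslessly; interpolation is only the fallback for a lost channel.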

    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
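
    A minimal sketch of the two-stage idea, assuming a Lagrangian selection rule (the function and parameter names below are illustrative, not the paper's): the first stage picks, per block of vectors, the codebook from the family with the lowest distortion-plus-rate cost, and the second stage quantizes the block with it.

```python
# Sketch of a two-stage encoder: stage one chooses the codebook with
# the lowest Lagrangian cost D + lam * R for this block; stage two
# quantises the block with it. Illustrative only.
import numpy as np

def encode_block(block, codebooks, lam, header_bits):
    """block: (n, k) array of k-dimensional vectors.
    codebooks: list of (m_i, k) arrays.
    Returns (chosen codebook id, per-vector codeword indices)."""
    best = None
    for cb_id, cb in enumerate(codebooks):
        d2 = ((block[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
        idx = d2.argmin(axis=1)                      # nearest codeword
        dist = d2[np.arange(len(block)), idx].sum()  # total distortion
        rate = header_bits + len(block) * np.log2(len(cb))
        cost = dist + lam * rate
        if best is None or cost < best[0]:
            best = (cost, cb_id, idx)
    return best[1], best[2]
```

    The transmitted bitstream then carries the chosen codebook index (the "first-stage" bits) followed by the codeword indices, which is what distinguishes this structure from a single-codebook quantizer.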

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion theory principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits to be achieved using the multiple-model design algorithms.
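
    One common way to phrase the multiple-model design criterion sketched here (a standard formulation, not necessarily the article's exact notation) is as a Lagrangian rate-distortion cost: each data block x is matched to the model in the collection that minimizes distortion plus lambda times rate, and each model is then re-optimized for the blocks assigned to it, in a generalized Lloyd iteration.

```latex
J(\lambda) \;=\; \sum_{x}\; \min_{m \in \mathcal{M}}
  \Bigl[\, d\bigl(x, \hat{x}_m\bigr) \;+\; \lambda\, r(x, m) \,\Bigr]
```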

    S frame design for multiple description video coding

    Multiple description video coding based on zero padding

    Region-adaptive probability model selection for the arithmetic coding of video texture

    In video coding systems using adaptive arithmetic coding to compress texture information, the employed symbol probability models need to be retrained every time the coding process moves into an area with different texture. To avoid this inefficiency, we propose to replace the probability models used in the original coder with multiple switchable sets of probability models. We determine the model set to use in each spatial region in an optimal manner, taking into account the additional signaling overhead. Experimental results show that this approach, when applied to H.264/AVC's context-based adaptive binary arithmetic coder (CABAC), yields significant bit-rate savings, comparable to or higher than those obtained using alternative improvements to CABAC previously proposed in the literature.
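
    A much-simplified sketch of the selection step (stationary multi-symbol models rather than CABAC's adaptive binary contexts; all names are illustrative): for each region, estimate the code length each candidate model set would produce via its cross-entropy against the region's symbol counts, add the signaling overhead, and keep the cheapest set.

```python
# Pick, per region, the probability model set minimising estimated
# code length plus signalling cost. Simplified: stationary models,
# not CABAC's adaptive binary contexts.
import numpy as np

def pick_model_set(symbols, model_sets, signal_bits):
    """symbols: 1-D int array of one region's symbols.
    model_sets: list of probability vectors over the symbol alphabet
    (assumed strictly positive). Returns (best set id, its bit cost)."""
    counts = np.bincount(symbols, minlength=len(model_sets[0]))
    costs = [signal_bits - (counts * np.log2(p)).sum() for p in model_sets]
    return int(np.argmin(costs)), min(costs)
```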

    Graded quantization for multiple description coding of compressive measurements

    Compressed sensing (CS) is an emerging paradigm for the acquisition of compressed representations of a sparse signal. Its low complexity is appealing for resource-constrained scenarios like sensor networks. However, such scenarios often involve unreliable communication channels, and robust transmission of the acquired data to a receiver is an issue. Multiple description coding (MDC) effectively combats channel losses for systems without feedback, raising interest in MDC methods explicitly designed for the CS framework that exploit its properties. We propose a method called Graded Quantization (CS-GQ) that leverages the democratic property of compressive measurements to effectively implement MDC, and we provide methods to optimize its performance. A novel decoding algorithm based on the alternating direction method of multipliers is derived to reconstruct signals from a limited number of received descriptions. Simulations assess the performance of CS-GQ against other methods in the presence of packet losses. The proposed method provides robust coding of CS measurements and outperforms the other schemes on the considered test metrics.
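
    The sketch below shows one plausible reading of the graded-quantization idea; the split points and step sizes are assumptions of this illustration, not the paper's parameters. Each description carries all measurements, half at a fine step and half at a coarse step, with the roles swapped between descriptions; the central decoder keeps the finer copy of each half. Actual signal recovery from the dequantized measurements would then use a sparse solver such as the ADMM-based one the paper derives.

```python
# One plausible sketch of graded quantization over two descriptions;
# the half/half split and step sizes are illustrative assumptions.
import numpy as np

def quantize(y, step):
    return np.round(y / step) * step

def cs_gq_encode(y, fine=0.01, coarse=0.1):
    h = len(y) // 2
    d1 = np.concatenate([quantize(y[:h], fine), quantize(y[h:], coarse)])
    d2 = np.concatenate([quantize(y[:h], coarse), quantize(y[h:], fine)])
    return d1, d2

def cs_gq_combine(d1=None, d2=None):
    """Central decoder: keep the finer half from each description.
    Side decoder: use whichever description arrived."""
    if d1 is not None and d2 is not None:
        h = len(d1) // 2
        return np.concatenate([d1[:h], d2[h:]])
    return d1 if d1 is not None else d2
```

    The democratic property of CS measurements is what makes this graceful: any received subset carries roughly equal information, so a single description still supports (coarser) reconstruction.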

    Zerotree design for image compression: toward weighted universal zerotree coding

    We consider the problem of optimal, data-dependent zerotree design for use in weighted universal zerotree codes for image compression. A weighted universal zerotree code (WUZC) is a data compression system that replaces the single, data-independent zerotree of Said and Pearlman (see IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 243-250, 1996) with an optimal collection of zerotrees for good image coding performance across a wide variety of possible sources. We describe the weighted universal zerotree encoding and design algorithms but focus primarily on the problem of optimal, data-dependent zerotree design. We demonstrate the performance of the proposed algorithm by comparing, at a variety of target rates, the performance of a Said-Pearlman style code using the standard zerotree to that of the same code using a zerotree designed with our algorithm. The comparison is made without entropy coding. The proposed zerotree design algorithm achieves, on a collection of combined text and gray-scale images, up to 4 dB of performance improvement over a Said-Pearlman zerotree.
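
    For orientation, a zerotree root in a Said-Pearlman style coder is a wavelet coefficient that is insignificant at the current threshold together with all of its descendants. The recursive check below is a simplified illustration of that test only; the paper's contribution, the data-dependent design of the tree structure itself, is not shown.

```python
# Simplified zerotree-root test over a wavelet coefficient tree.
# `children` maps a node id to its child node ids; designing this
# parent-child structure optimally is the paper's actual subject.
def is_zerotree_root(node, coeffs, children, threshold):
    """True if `node` and all of its descendants are insignificant
    (|coefficient| below threshold), i.e. a zerotree root."""
    if abs(coeffs[node]) >= threshold:
        return False
    return all(is_zerotree_root(c, coeffs, children, threshold)
               for c in children.get(node, ()))
```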

    Complexity Analysis Of Next-Generation VVC Encoding and Decoding

    While the next-generation video compression standard, Versatile Video Coding (VVC), provides superior compression efficiency, its computational complexity increases dramatically. This paper thoroughly analyzes this complexity for both the encoder and decoder of VVC Test Model 6, quantifying the complexity breakdown for each coding tool and measuring the complexity and memory requirements of VVC encoding/decoding. These extensive analyses are performed for six video sequences at 720p, 1080p, and 2160p, under Low-Delay (LD), Random-Access (RA), and All-Intra (AI) conditions (a total of 320 encodings/decodings). Results indicate that the VVC encoder and decoder are 5x and 1.5x more complex than HEVC in LD, and 31x and 1.8x in AI, respectively. Detailed analysis of the coding tools reveals that in LD, on average, motion estimation tools at 53%, transformation and quantization at 22%, and entropy coding at 7% dominate the encoding complexity. In decoding, loop filters at 30%, motion compensation at 20%, and entropy decoding at 16% are the most complex modules. Moreover, the memory bandwidth required for VVC encoding/decoding is measured through memory profiling and amounts to 30x and 3x that of HEVC, respectively. The reported results and insights are a guide for future research and implementations of energy-efficient VVC encoders/decoders.
    Comment: IEEE ICIP 202
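
    As a rough illustration of this kind of measurement (not the paper's profiling setup, and peak resident memory is a far coarser proxy than the memory-bandwidth profiling reported there), one could time an encoder run and read the child process's peak RSS on Linux as follows; the encoder command is a placeholder.

```python
# Rough wall-time / peak-memory harness for a child encoder process
# (Linux; the command below is a placeholder, not the VTM invocation
# used in the paper).
import resource
import subprocess
import time

def profile(cmd):
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)
    wall = time.perf_counter() - t0
    # Peak resident set size of terminated children, in KiB on Linux.
    peak_kib = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return wall, peak_kib

# wall_s, peak_kib = profile(["./my_encoder", "--cfg", "example.cfg"])
```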