    Error-resilient performance of Dirac video codec over packet-erasure channel

    Video transmission over wireless or wired networks requires an error-resilience mechanism, since compressed video bitstreams are sensitive to transmission errors owing to their use of predictive coding and variable-length coding. This paper investigates the performance of a simple, low-complexity error-resilient coding scheme that combines source and channel coding to protect the compressed bitstream of the wavelet-based Dirac video codec over a packet-erasure channel. By partitioning the wavelet transform coefficients of the motion-compensated residual frame into groups and independently processing each group with arithmetic and Forward Error Correction (FEC) coding, Dirac achieves robustness to transmission errors, with video quality that degrades gracefully over packet loss rates of up to 30%, in contrast to conventional FEC-only methods. Simulation results also show that the proposed multiple-partition scheme can achieve up to 10 dB PSNR gain over the existing un-partitioned format. The paper also compares the error-resilient performance of the proposed scheme with that of H.264 over the packet-erasure channel.
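
    As a rough illustration of the partitioning idea above, the following Python sketch splits the residual-frame coefficients into interleaved groups and protects each group independently; the helper names, the group count, and the XOR-parity stand-in for FEC are all hypothetical, not Dirac's actual scheme.

```python
import numpy as np

def partition_coefficients(coeffs, n_groups):
    """Split flattened residual-frame coefficients into interleaved
    groups so each group can be entropy- and FEC-coded independently."""
    flat = coeffs.ravel()
    return [flat[g::n_groups] for g in range(n_groups)]

def xor_parity(packets):
    """Toy FEC stand-in: one XOR parity packet can repair any single
    lost packet of the group (real schemes use stronger codes)."""
    size = max(len(p) for p in packets)
    parity = bytearray(size)
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

# A packet erasure now corrupts at most one partition, so the decoder
# can still reconstruct the other groups and degrade gracefully.
coeffs = np.random.randn(64, 64).astype(np.float32)   # mock residual frame
groups = partition_coefficients(coeffs, n_groups=4)
payloads = [g.tobytes() for g in groups]              # stand-in for arithmetic coding
parity_packet = xor_parity(payloads)
```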

    Enabling error-resilient internet broadcasting using motion compensated spatial partitioning and packet FEC for the dirac video codec

    Video transmission over wireless or wired networks requires protection from channel errors, since compressed video bitstreams are very sensitive to transmission errors owing to their use of predictive coding and variable-length coding. In this paper, a simple, low-complexity and patent-free error-resilient coding scheme is proposed. It is based on applying spatial partitioning to the motion-compensated residual frame without employing transform coefficient coding. The proposed scheme is intended for the open-source Dirac video codec, to enable the codec to be used for Internet broadcasting. By partitioning the wavelet transform coefficients of the motion-compensated residual frame into groups and independently processing each group using arithmetic coding and Forward Error Correction (FEC), robustness to transmission errors over a packet-erasure wired network can be achieved. Using Rate-Compatible Punctured Convolutional (RCPC) codes and Turbo Codes (TC) as the FEC, the proposed technique provides gracefully decreasing perceptual quality at packet loss rates of up to 30%, and its PSNR performance is much better than that of conventional data-partitioning-only methods. Simulation results show that multiple partitioning of the wavelet coefficients in Dirac can achieve up to 8 dB PSNR gain over the existing un-partitioned method.
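
    The RCPC component can be pictured as one convolutional mother code plus a nested family of puncturing patterns, so error protection can be traded against rate without switching codes. A minimal sketch of that puncture/depuncture step follows; the patterns shown are illustrative, not the published RCPC tables, and the mother encoder/decoder themselves are omitted.

```python
import numpy as np

# One rate-1/2 mother code, nested puncturing patterns (1 = send the
# coded bit, 0 = puncture it).  Patterns are illustrative only; real
# RCPC systems use published, optimized tables.
PATTERNS = {
    "1/2": [1, 1, 1, 1, 1, 1, 1, 1],   # keep all bits: full protection
    "2/3": [1, 1, 0, 1, 1, 1, 0, 1],   # keep 6/8 of the mother-code bits
    "4/5": [1, 1, 0, 1, 0, 1, 0, 1],   # keep 5/8; subset of the 2/3 positions
}

def puncture(coded_bits, rate):
    """Drop the coded bits marked 0 in the pattern for this rate."""
    pat = np.resize(PATTERNS[rate], coded_bits.size).astype(bool)
    return coded_bits[pat]

def depuncture(rx_values, n_coded, rate):
    """Reinsert neutral soft values (0.0) at punctured positions so the
    same mother-code decoder can be reused at every rate."""
    pat = np.resize(PATTERNS[rate], n_coded).astype(bool)
    out = np.zeros(n_coded)
    out[pat] = rx_values
    return out

coded = np.random.choice([-1.0, 1.0], size=64)   # mock mother-code output
sent = puncture(coded, "4/5")
restored = depuncture(sent, coded.size, "4/5")   # ready for the decoder
```

    Rate compatibility means the bits kept at a high rate are a subset of those kept at any lower rate, so extra parity can be sent incrementally when the channel worsens.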

    Robust image and video coding with pyramid vector quantisation

    Vector Quantization Video Encoder Using Hierarchical Cache Memory Scheme

    A system compresses image blocks via successive hierarchical stages and motion encoders that employ caches updated by stack replacement algorithms. Initially, a background detector compares the present image block with the corresponding previously encoded image block and, if they are similar, terminates the encoding procedure by setting a flag bit. Otherwise, the image block is decomposed into smaller present image subblocks. Each present image subblock is compared with a corresponding previously encoded image subblock of comparable size within the present image block. When a present image subblock is similar to its corresponding previously encoded image subblock, the procedure is terminated by setting a flag bit. Otherwise, the present image subblock is forwarded to a motion encoder, where it is compared with displaced image subblocks, formed by displacing previously encoded image subblocks by motion vectors stored in a cache, to derive a first distortion vector. When the first distortion vector is below a first threshold TM, the procedure terminates and the present image subblock is encoded by setting a flag bit and a cache index corresponding to the first distortion vector. Otherwise, the present image subblock is passed to a block-matching encoder, where it is compared with other previously encoded image subblocks to derive a second distortion vector. When the second distortion vector is below a second threshold Tm, the procedure is terminated by setting a flag bit, generating the second distortion vector, and updating the cache.
    Georgia Tech Research Corporation
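
    The cascade described above can be sketched as a chain of early-exit tests. The Python outline below is a loose reading of the abstract, not the patented implementation: the thresholds, block size, cache size, and the LRU policy standing in for the stack replacement algorithm are all assumed for illustration, and the quadtree decomposition into subblocks is omitted.

```python
import numpy as np
from collections import OrderedDict

BLK = 8                                  # subblock size (assumed)
T_BG, T_M, T_B = 4.0, 6.0, 8.0           # thresholds (hypothetical values)

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def block(frame, y, x):
    return frame[y:y + BLK, x:x + BLK]

def encode_subblock(cur_f, prev_f, y, x, mv_cache):
    """Early-exit encoding cascade for one subblock (sketch)."""
    H, W = prev_f.shape
    cur = block(cur_f, y, x)
    # Stage 1: background detector vs. the co-located previous subblock.
    if mse(cur, block(prev_f, y, x)) < T_BG:
        return ("background",)                       # flag bit only
    # Stage 2: motion encoder trying motion vectors held in the cache
    # (an LRU OrderedDict stands in for the stack replacement algorithm).
    legal = [mv for mv in mv_cache
             if 0 <= y + mv[0] <= H - BLK and 0 <= x + mv[1] <= W - BLK]
    if legal:
        mv = min(legal, key=lambda v: mse(cur, block(prev_f, y + v[0], x + v[1])))
        if mse(cur, block(prev_f, y + mv[0], x + mv[1])) < T_M:
            mv_cache.move_to_end(mv)                 # cache hit: refresh entry
            return ("cache", list(mv_cache).index(mv))
    # Stage 3: full block-matching search over a small window.
    cands = [(dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)
             if 0 <= y + dy <= H - BLK and 0 <= x + dx <= W - BLK]
    mv = min(cands, key=lambda v: mse(cur, block(prev_f, y + v[0], x + v[1])))
    if mse(cur, block(prev_f, y + mv[0], x + mv[1])) < T_B:
        mv_cache[mv] = None                          # cache update
        if len(mv_cache) > 8:
            mv_cache.popitem(last=False)             # evict the oldest vector
        return ("matched", mv)
    return ("intra", cur)                            # nothing matched: code the block

cache = OrderedDict({(0, 0): None})                  # seed the motion-vector cache
```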

    Regularity scalable image coding based on wavelet singularity detection

    In this paper, we propose an adaptive algorithm for scalable wavelet image coding based on a general feature of images: their regularity. In pattern recognition and computer vision, the regularity of an image is estimated from the oriented wavelet coefficients and quantified by Lipschitz exponents. To estimate the Lipschitz exponents, evaluating the interscale evolution of the wavelet transform modulus sum (WTMS) over the directional cone of influence has been proven a better approach than tracing the wavelet transform modulus maxima (WTMM), because the irregular sampling nature of the WTMM complicates the reconstruction process. Moreover, examples have been found showing that the WTMM representation cannot uniquely characterize a signal, which implies that reconstructing a signal from its WTMM may not be consistently stable. Furthermore, the WTMM approach requires much more computational effort. We therefore use the WTMS approach to estimate the regularity of images from the separable wavelet transform coefficients. Since localization is not a concern here, we allow decimation when evaluating the interscale evolution. The estimated regularity is then utilized in our proposed adaptive regularity-scalable wavelet image coding algorithm. The algorithm can be embedded into any wavelet image coder, so it is compatible with existing scalable coding techniques, such as resolution-scalable and signal-to-noise ratio (SNR) scalable coding, without changing the bitstream format, while providing more scalability levels with higher peak signal-to-noise ratios (PSNRs) at lower bit rates. The proposed algorithm outperforms other feature-based wavelet scalable coding algorithms in terms of visual perception, computational complexity and coding efficiency.
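
    To make the WTMS idea concrete, the sketch below estimates a Lipschitz exponent from the slope of the log modulus sum across dyadic scales. It uses a crude difference filter in place of a proper directional wavelet, and the cone width, scale count, and omitted normalization constant are all assumptions, so it illustrates the interscale-evolution principle rather than the paper's algorithm.

```python
import numpy as np

def detail_coeffs(signal, n_scales):
    """Crude undecimated detail filter: finite differences at dyadic
    scales 2^0..2^(n-1), standing in for a proper directional wavelet."""
    s = signal.astype(float)
    return [s[2 ** j:] - s[:-2 ** j] for j in range(n_scales)]

def lipschitz_estimate(signal, x0, n_scales=5, cone=2):
    """Slope of log2(WTMS) vs. scale index over the cone of influence of
    sample x0; relates to the Lipschitz exponent up to a normalization
    offset that is ignored in this sketch."""
    logs, scales = [], []
    for j, d in enumerate(detail_coeffs(signal, n_scales)):
        width = cone * 2 ** j                  # cone of influence widens with scale
        lo, hi = max(0, x0 - width), min(d.size, x0 + width)
        total = np.abs(d[lo:hi]).sum()
        if total > 0:
            logs.append(np.log2(total))
            scales.append(j)
    return np.polyfit(scales, logs, 1)[0]      # interscale evolution slope

# A smoother singularity gives faster growth of the modulus sum.
sig = np.r_[np.zeros(256), np.ones(256)]       # step edge at x0 = 256
print(lipschitz_estimate(sig, 256))
```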

    A family of stereoscopic image compression algorithms using wavelet transforms

    With the standardization of JPEG-2000, wavelet-based image and video compression technologies are gradually replacing the popular DCT-based methods. In parallel, recent developments in autostereoscopic display technology are poised to revolutionize the way consumers enjoy traditional 2D display-based electronic media such as television, computers and movies. However, due to the two-fold bandwidth/storage requirement of stereoscopic imaging, an essential requirement of a stereo imaging system is efficient data compression. In this thesis, seven wavelet-based stereo image compression algorithms are proposed to take advantage of the higher data compaction capability and better flexibility of wavelets. In the proposed CODEC I, block-based disparity estimation/compensation (DE/DC) is performed in the pixel domain. However, this is inefficient when the DWT is applied to the whole predictive error image resulting from the DE process, because of the artificial block boundaries between error blocks in the predictive error image. To overcome this problem, in the remaining proposed CODECs, DE/DC is performed in the wavelet domain. Due to the multiresolution nature of the wavelet domain, two methods of disparity estimation and compensation have been proposed. The first performs DE/DC in each subband of the lowest/coarsest resolution level and then propagates the disparity vectors obtained to the corresponding subbands of higher/finer resolutions. Note that DE is not performed in every subband, due to the high overhead bits that would be required for coding the disparity vectors of all subbands. This method is used in CODEC II. In the second method, DE/DC is performed in the wavelet-block domain, which enables disparity estimation to be performed in all subbands simultaneously without increasing the overhead bits required for coding the disparity vectors. This method is used by CODEC III, and performing disparity estimation/compensation in all subbands results in a significant performance improvement. To further improve the performance of CODEC III, a pioneering wavelet-block search technique is implemented in CODEC IV; it enables the right/predicted image to be reconstructed at the decoder without transmitting the disparity vectors. In the proposed CODEC V, pioneering block search is performed in all subbands of the DWT decomposition, which further improves performance. Furthermore, CODECs IV and V are able to operate at very low bit rates (< 0.15 bpp). In CODEC VI and CODEC VII, Overlapped Block Disparity Compensation (OBDC) is used with and without the need to code disparity vectors. Our experimental results showed that no significant coding gains could be obtained for these CODECs over CODECs IV and V. All CODECs proposed in this thesis are wavelet-based stereo image coding algorithms that maximise the flexibility and benefits offered by wavelet transform technology when applied to stereo imaging. In addition, the use of a baseline-JPEG coding architecture enables the easy adaptation of the proposed algorithms within systems originally built for DCT-based coding, an important feature during an era in which DCT-based technology is only slowly being phased out to give way to DWT-based compression technology.
    In addition, this thesis proposes a stereo image coding algorithm that uses JPEG-2000 technology as its basic compression engine. The proposed CODEC, named RASTER, is a rate-scalable stereo image CODEC with a unique ability to preserve image quality at binocular depth boundaries, an important requirement in the design of stereo image CODECs. Experimental results show that the proposed CODEC achieves PSNR gains of up to 3.7 dB compared with directly transmitting the right frame using JPEG-2000.
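
    As a rough companion to the wavelet-domain DE/DC description, the sketch below performs a plain block-based SAD disparity search in a single subband; the block size, search range, and coarse-to-fine doubling step are illustrative choices, not the thesis parameters.

```python
import numpy as np

def sad(a, b):
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

def disparity_search(left_band, right_band, blk=8, max_d=16):
    """For each block of the right subband, find the horizontal shift
    into the left subband with minimum SAD (one-sided stereo search)."""
    H, W = right_band.shape
    vectors = np.zeros((H // blk, W // blk), dtype=int)
    for by in range(H // blk):
        for bx in range(W // blk):
            y, x = by * blk, bx * blk
            target = right_band[y:y + blk, x:x + blk]
            best_cost, best_d = np.inf, 0
            for d in range(max_d + 1):
                if x + d + blk > W:
                    break
                cost = sad(target, left_band[y:y + blk, x + d:x + d + blk])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            vectors[by, bx] = best_d
    return vectors

# CODEC II-style propagation: estimate once at the coarsest level, then
# carry the vectors to finer levels by doubling them (vectors * 2).
```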