265 research outputs found

    Scalable Video Coding

    Get PDF
    With the evolution of the Internet into heterogeneous networks, both in terms of processing power and network bandwidth, different users demand different versions of the same content. This has given birth to the scalable era of video content, in which a single bitstream contains multiple versions of the same video that differ in resolution, frame rate, or quality. Several early standards, such as MPEG-2 Video, H.263, and MPEG-4 Part II, already include tools that provide different modalities of scalability. However, the scalable profiles of these standards are seldom used, because their scalability comes with a significant loss in coding efficiency and because the Internet was still at an early stage. The scalable extension of H.264/AVC, named Scalable Video Coding, was published in July 2007. It introduces several new coding techniques and narrows the coding-efficiency gap with state-of-the-art non-scalable codecs while keeping the complexity increase reasonable. After an introduction to scalable video coding, we present a proposal concerning the scalable functionality of H.264/AVC: improving the compression ratio in the enhancement layers (ELs) of a subband/wavelet-based scalable bitstream. A new adaptive scanning methodology for the intra-frame scalable coding framework, based on the subband/wavelet coding approach, is presented for H.264/AVC scalable video coding. It exploits prior knowledge of the frequencies present in the different higher-frequency subbands. Thus, merely by modifying the scan order of the intra-frame scalable coding framework of H.264/AVC, we obtain better compression without any compromise on PSNR.
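The idea of adapting the coefficient scan order to the dominant frequency orientation of each wavelet subband can be sketched as follows. This is a minimal illustration, not the abstract's actual algorithm: the scan orders for the HL/LH subbands are hypothetical choices that merely demonstrate why matching the scan to a subband's orientation groups significant coefficients earlier in the serialized run.

```python
import numpy as np

def zigzag_scan_4x4():
    # Standard H.264/AVC zig-zag scan order for a 4x4 block.
    return [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
            (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

def subband_adapted_scan_4x4(subband):
    """Illustrative scan orders adapted to a subband's orientation:
    an HL subband carries mostly horizontal high frequencies (vertical
    edges), so a column-first scan tends to visit the significant
    coefficients first; LH is the transposed case."""
    if subband == "HL":   # column-first scan
        return [(r, c) for c in range(4) for r in range(4)]
    if subband == "LH":   # row-first scan
        return [(r, c) for r in range(4) for c in range(4)]
    return zigzag_scan_4x4()  # LL/HH: keep the default zig-zag

def scan(block, order):
    # Serialize a 4x4 coefficient block into a 1-D run for entropy coding.
    return [block[r][c] for r, c in order]

# Toy HL-subband block: energy concentrated in the first column.
blk = np.zeros((4, 4), dtype=int)
blk[:, 0] = [9, 7, 5, 3]
print(scan(blk, subband_adapted_scan_4x4("HL")))  # significant coeffs come first
```

With the adapted scan, the nonzero coefficients cluster at the start of the run and the trailing zeros compress well; the default zig-zag would interleave them with zeros.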

    Compressed-domain transcoding of H.264/AVC and SVC video streams

    Get PDF

    A fully scalable wavelet video coding scheme with homologous inter-scale prediction

    Get PDF
    In this paper, we present a fully scalable wavelet-based video coding architecture called STP-Tool, in which motion-compensated temporal-filtered subbands of spatially scaled versions of a video sequence can be used as a base layer for inter-scale predictions. These predictions take place in a pyramidal closed-loop structure between homologous-resolution data, i.e., without the need for spatial interpolation. The presented implementation of the STP-Tool architecture is based on the reference software of the Wavelet Video Coding MPEG Ad-Hoc Group. The STP-Tool architecture makes it possible to compensate for some of the typical drawbacks of current wavelet-based scalable video coding architectures and shows interesting objective and visual results, even when compared with other wavelet-based or MPEG-4 AVC/H.264-based scalable video coding systems.
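The key point of "homologous" inter-scale prediction can be sketched numerically. The toy code below is an assumption-laden illustration, not the STP-Tool implementation: the Haar LL subband stands in for the wavelet decomposition, and a coarse rounding step stands in for base-layer quantization. Because the decoded base layer and the enhancement layer's LL subband have the same resolution, the prediction needs no spatial interpolation.

```python
import numpy as np

def haar_ll(img):
    # LL subband of a one-level Haar transform: 2x2 block averages.
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

full = np.arange(64, dtype=float).reshape(8, 8)   # full-resolution frame
ll_subband = haar_ll(full)                        # 4x4 LL of the EL transform

# Hypothetical "decoded base layer": same LL data, coarsely quantized
# (step 8) to mimic base-layer coding losses.
base_decoded = np.round(haar_ll(full) / 8.0) * 8.0

# Homologous prediction: both operands are 4x4, so the residual is
# formed directly, with no upsampling of the base layer.
residual = ll_subband - base_decoded
print(residual.shape, np.abs(residual).max())
```

The residual stays bounded by half the quantization step, which is what makes it cheap for the enhancement layer to encode.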

    Spatially Scalable Video Coding (SSVC) Using Motion Compensated Recursive Temporal Filtering (MCRTF)

    Get PDF
    In the coming years, streaming providers will increasingly be tasked with supplying enhanced video streams to devices such as mobile phones and set-top boxes, alongside variants at different quality levels for clients receiving content over the general Internet. While there have been various approaches to this problem, including multiple-bit-rate streaming, one very strong candidate is an H.264 extension called Scalable Video Coding (SVC). It encodes video into "layers", beginning with the "base" layer, which contains the minimal information of the bit-stream, and moving on to "enhancement layers", which carry the information needed to scale up the output. SVC also supports different resolutions within a single compressed bit-stream, which is known as spatial scalability. In this thesis, a problem in SSVC is addressed. The video sequences are made scalable in the spatial domain. To make the scheme more efficient for real-time applications, motion-compensated recursive temporal filtering (MCRTF) is implemented. This scheme enhances the efficiency of the components of a visual signal: the temporal filter reduces noise arising from the plurality of frames, and the improved, noise-reduced output is used in the process of predictive encoding. It also eliminates the inherent drift that arises from the mismatch between encoder and decoder. Since visual signals always exhibit temporal correlation, motion compensation from adjacent frames, used as the reference during predictive coding, is of primary importance. The conventional and the proposed methods have been used to encode various video sequences in the spatial domain, and an analytical study of the results has been carried out.
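The recursive temporal filtering described above can be sketched as a first-order recursion: each filtered frame blends the current input with a motion-compensated version of the previous filtered frame. This is a simplified sketch under stated assumptions, not the thesis's MCRTF implementation: `motion_comp` is a placeholder for a real block-matching/warping step and defaults to identity, which assumes a static scene.

```python
import numpy as np

def recursive_temporal_filter(frames, alpha=0.5, motion_comp=None):
    """First-order recursive temporal filter:
        y[t] = alpha * x[t] + (1 - alpha) * MC(y[t-1])
    The recursion attenuates temporal noise because the filtered past
    frame, not the noisy input, serves as the prediction reference."""
    if motion_comp is None:
        motion_comp = lambda prev: prev  # identity: assumes no motion
    out = [frames[0].astype(float)]
    for f in frames[1:]:
        pred = motion_comp(out[-1])                 # reference from filtered past
        out.append(alpha * f + (1 - alpha) * pred)  # recursive blend
    return out

# Noisy static sequence: filtered frames converge toward the clean value.
rng = np.random.default_rng(0)
clean = np.full((8, 8), 100.0)
noisy = [clean + rng.normal(0.0, 10.0, clean.shape) for _ in range(20)]
filtered = recursive_temporal_filter(noisy, alpha=0.3)
print(np.std(noisy[-1] - clean), np.std(filtered[-1] - clean))
```

Lower `alpha` weights the filtered history more heavily, giving stronger noise suppression at the cost of slower adaptation to real scene changes, which is why motion compensation of the reference matters in practice.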