Scalable Video Coding
With the evolution of the Internet into heterogeneous networks, both in terms of processing power and network bandwidth, different users demand different versions of the same content. This has given birth to the scalable era of video content, in which a single bitstream contains multiple versions of the same video that differ in resolution, frame rate or quality. Several early standards, such as MPEG-2 Video, H.263 and MPEG-4 Part II, already include tools that provide different modalities of scalability. However, the scalable profiles of these standards are seldom used, because their scalability comes with a significant loss in coding efficiency and because the Internet was then at an early stage. The scalable extension of H.264/AVC, named scalable video coding, was published in July 2007. It introduces several new coding techniques and narrows the coding-efficiency gap with state-of-the-art non-scalable codecs while keeping the complexity increase reasonable. After an introduction to scalable video coding, we present a proposal regarding the scalable functionality of H.264/AVC: improving the compression ratio in the enhancement layers (ELs) of a subband/wavelet-based scalable bitstream. A new adaptive scanning methodology for the intra-frame scalable coding framework, based on the subband/wavelet coding approach, is presented for H.264/AVC scalable video coding. It takes advantage of prior knowledge of the frequencies present in the different higher-frequency subbands. Thus, simply by modifying the scan order of the intra-frame scalable coding framework of H.264/AVC, we obtain better compression without any compromise in PSNR.
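As a toy illustration of the idea, the sketch below contrasts the standard H.264 4x4 zig-zag scan with a hypothetical column-major scan for a coefficient block of an HL-type subband, whose energy clusters in one column. The block values, the adapted scan order and the zero-run metric are illustrative assumptions, not the paper's actual scan tables.

```python
# Standard 4x4 zig-zag scan order used by H.264 for intra residual blocks.
ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0),
              (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2),
              (1, 3), (2, 3), (3, 2), (3, 3)]

# Hypothetical adapted order for an HL subband whose energy concentrates
# in the first column: a column-major scan reaches the zeros sooner.
COLUMN_SCAN_4x4 = [(r, c) for c in range(4) for r in range(4)]

def scan(block, order):
    """Serialize a 4x4 coefficient block along the given scan order."""
    return [block[r][c] for r, c in order]

def trailing_zeros(coeffs):
    """Length of the final zero run -- a rough proxy for entropy-coding cost."""
    n = 0
    for v in reversed(coeffs):
        if v != 0:
            break
        n += 1
    return n

# Toy coefficient block for an HL subband: energy only in column 0.
hl_block = [[9, 0, 0, 0],
            [4, 0, 0, 0],
            [2, 0, 0, 0],
            [1, 0, 0, 0]]

print(trailing_zeros(scan(hl_block, ZIGZAG_4x4)))      # -> 6
print(trailing_zeros(scan(hl_block, COLUMN_SCAN_4x4)))  # -> 12
```

The longer trailing zero run under the subband-matched scan means cheaper run-length/entropy coding for the same coefficients, which is the mechanism behind the abstract's "better compression without any compromise in PSNR".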
Combined Source and Channel Strategies for Optimized Video Communications
ISBN 978-953-7619-70-
A fully scalable wavelet video coding scheme with homologous inter-scale prediction
In this paper, we present a fully scalable wavelet-based video coding architecture called STP-Tool, in which motion-compensated temporal-filtered subbands of spatially scaled versions of a video sequence can be used as a base layer for inter-scale predictions. These predictions take place in a pyramidal closed-loop structure between homologous-resolution data, i.e., without the need for spatial interpolation. The presented implementation of the STP-Tool architecture is based on the reference software of the Wavelet Video Coding MPEG Ad-Hoc Group. The STP-Tool architecture makes it possible to compensate for some of the typical drawbacks of current wavelet-based scalable video coding architectures, and it shows interesting objective and visual results even when compared with other wavelet-based or MPEG-4 AVC/H.264-based scalable video coding systems.
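The homologous (same-resolution) prediction can be sketched as follows, using a one-level Haar transform as a stand-in for the codec's actual wavelet; the function names and the toy frames are assumptions for illustration only.

```python
def haar_ll(frame):
    """LL subband of a one-level 2-D Haar split (even dimensions);
    the three high-pass subbands are omitted for brevity."""
    h, w = len(frame), len(frame[0])
    return [[(frame[2*r][2*c] + frame[2*r][2*c + 1]
              + frame[2*r + 1][2*c] + frame[2*r + 1][2*c + 1]) / 4.0
             for c in range(w // 2)] for r in range(h // 2)]

def interscale_residual(el_frame, base_recon):
    """Residual between the enhancement layer's LL subband and the
    reconstructed half-resolution base layer.  Both signals live at the
    same resolution, so no spatial interpolation is needed -- the
    'homologous' property the paper exploits."""
    ll = haar_ll(el_frame)
    return [[ll[r][c] - base_recon[r][c] for c in range(len(ll[0]))]
            for r in range(len(ll))]

# A flat 4x4 frame whose base-layer reconstruction is perfect: the
# inter-scale residual vanishes, leaving nothing to encode.
frame = [[8] * 4 for _ in range(4)]
base = [[8.0] * 2 for _ in range(2)]
print(interscale_residual(frame, base))  # -> [[0.0, 0.0], [0.0, 0.0]]
```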
Spatially Scalable Video Coding (SSVC) Using Motion Compensated Recursive Temporal Filtering (MCRTF)
In the coming years, streaming providers will increasingly be tasked with supplying enhanced video streams to devices such as mobile phones and set-top boxes, alongside diverse quality variants for clients retrieving content over the general Internet. While there have been various approaches to this problem, including multiple-bit-rate delivery, one exceptionally strong candidate is the H.264 extension called Scalable Video Coding (SVC). It encodes video into "layers", beginning with the "base" layer, which contains the minimal information of the bit-stream, and moving on to "enhancement layers", which add the information needed to scale up the output. SVC also supports multiple resolutions within a single compressed bit-stream, which is known as spatial scalability. In this thesis a problem in SSVC has been addressed. The video sequences are made scalable in the spatial domain. To make the scheme more efficient for real-time applications, motion-compensated recursive temporal filtering (MCRTF) has been implemented. This scheme enhances the efficiency of the components of a visual signal. The temporal filter helps reduce noise arising from the plurality of frames, and the improved, noise-reduced output is used in the process of predictive encoding. It also eliminates the inherent drift that arises from the difference between encoder and decoder. As visual signals always exhibit temporal correlation, motion compensation from adjacent frames, used as the reference during predictive coding, is of prime importance. The conventional and the proposed methods have been used during the encoding of various video sequences in the spatial domain, and an analytical study of the results has been carried out.
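A minimal sketch of motion-compensated recursive temporal filtering, assuming whole-frame integer motion vectors and a fixed filter weight `alpha`; both are simplifications of the thesis's actual scheme.

```python
def motion_compensate(ref, dx, dy):
    """Shift a reference frame by an integer motion vector (dx, dy),
    replicating border pixels -- a simplification of block-based MC."""
    h, w = len(ref), len(ref[0])
    return [[ref[min(max(r - dy, 0), h - 1)][min(max(c - dx, 0), w - 1)]
             for c in range(w)] for r in range(h)]

def mcrtf(frames, motion, alpha=0.5):
    """Recursive temporal filtering along the motion trajectory:
        out[t] = alpha * frames[t] + (1 - alpha) * MC(out[t-1]).
    Averaging along the trajectory suppresses frame noise, and because
    the recursion runs on the filtered frames (which the decoder can
    reproduce), encoder/decoder drift is avoided."""
    out = [[row[:] for row in frames[0]]]
    for t in range(1, len(frames)):
        pred = motion_compensate(out[-1], *motion[t])
        cur = frames[t]
        out.append([[alpha * cur[r][c] + (1 - alpha) * pred[r][c]
                     for c in range(len(cur[0]))] for r in range(len(cur))])
    return out

# Static 2x2 scene with one noisy frame: the noise spike at (0, 0) is
# halved on each pass through the recursive filter.
frames = [[[10, 10], [10, 10]],
          [[14, 10], [10, 10]],
          [[10, 10], [10, 10]]]
out = mcrtf(frames, motion=[(0, 0)] * 3)
print(out[2][0][0])  # -> 11.0 (spike decayed from 14 toward 10)
```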
Scalable and network aware video coding for advanced communications over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. This work addresses the issues concerned with the provision of scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and users' acceptable quality of service.
In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques achieving automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources. The research ensured the consideration of various levels of user-acceptable QoS. The techniques are further evaluated with a view to establishing their performance against state-of-the-art scalable and non-scalable techniques.
To further improve the adaptability of the designed techniques, several experiments and real-time simulations are conducted with the aim of determining the optimum performance under various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analyzed to determine their performance on various types of video content and formats. Several algorithms are developed to provide dynamic adaptation of coding tools and parameters to the specific video content type, format and transmission bandwidth.
Because of the nature of heterogeneous networks, where channel conditions, terminals, and users' capabilities and preferences change unpredictably, limiting the adaptability of any single technique, a Dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator, and the network-aware selection of the scalability techniques is based on real-time simulation results. A technique with minimum delay, low bit-rate, low frame rate and low quality is adopted as a reactive measure to a predicted bad channel condition. If the use of these techniques is not favoured owing to reported deteriorating channel conditions, a reduced layered stream or the base layer is used. If the network status does not allow the use of the base layer, the stream uses parameter identifiers with high efficiency to improve the scalability and adaptation of the video service.
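Such channel-driven decision logic can be sketched as a simple rule table. The bandwidth and loss thresholds and the mode names below are hypothetical placeholders, since the thesis derives its actual criteria from real-time simulation results.

```python
def select_scalability(bandwidth_kbps, loss_rate):
    """Pick a scalability mode from reported channel conditions.
    The thresholds and mode names are illustrative assumptions,
    not the thesis's actual decision rules."""
    if loss_rate > 0.10 or bandwidth_kbps < 64:
        return "base_layer_only"       # reactive fallback for bad channels
    if bandwidth_kbps < 256:
        return "temporal_scalability"  # drop frame rate first
    if bandwidth_kbps < 768:
        return "spatial_scalability"   # reduced-resolution stream
    return "full_stream"               # all layers, including quality ELs

print(select_scalability(1500, 0.01))  # -> full_stream
print(select_scalability(128, 0.02))   # -> temporal_scalability
print(select_scalability(512, 0.20))   # -> base_layer_only
```

The ordering of the checks mirrors the reactive strategy described above: a predicted bad channel forces the lowest-cost stream before any richer mode is considered.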
To further improve the flexibility and efficiency of the algorithm, dynamic de-blocking filter and lambda value selection are analyzed and introduced into the algorithm. Various methods, interfaces and algorithms are defined for transcoding from one technique to another and for extracting sub-streams when the network conditions do not allow transmission of the entire bit-stream.