
    On the impact of the GOP size in a temporal H.264/AVC-to-SVC transcoder in baseline and main profile

    Scalable video coding (SVC) is a recent extension of the H.264/AVC advanced video coding standard developed jointly by ISO/IEC and ITU-T, which allows the bitstream to be adapted easily by dropping parts of it called layers. This adaptation makes it possible for a single bitstream to meet the requirements for reliable delivery of video to diverse clients over heterogeneous networks using temporal, spatial or quality scalability, combined or separately. Since the scalable video coding design requires scalability to be provided at the encoder side, existing content cannot benefit from it, so efficient techniques for converting content without scalability to a scalable format are desirable. In this paper, an approach for temporal scalability transcoding from H.264/AVC to scalable video coding in the baseline and main profiles is presented and the impact of the GOP size is analyzed. Independently of the GOP size chosen, time savings of around 63% for the baseline profile and 60% for the main profile are achieved while maintaining the coding efficiency.
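
    As a rough illustration of the temporal scalability discussed above, the snippet below is a minimal sketch, not taken from the paper, of how display-order frames inside a dyadic GOP can be mapped to SVC temporal layers, so that dropping the highest layers yields lower frame-rate sub-streams. The function and variable names are illustrative assumptions.

    def temporal_layer(frame_index: int, gop_size: int) -> int:
        """Temporal layer of a frame given its 1-based position inside a dyadic GOP."""
        assert gop_size & (gop_size - 1) == 0, "dyadic GOP sizes (powers of two) only"
        max_layer = gop_size.bit_length() - 1        # e.g. GOP 8 -> layers 0..3
        if frame_index % gop_size == 0:
            return 0                                 # key picture: base temporal layer
        # the more often the index divides by two, the coarser (lower) the layer
        trailing_zeros = (frame_index & -frame_index).bit_length() - 1
        return max_layer - trailing_zeros

    if __name__ == "__main__":
        gop = 8
        print({i: temporal_layer(i, gop) for i in range(1, gop + 1)})
        # {1: 3, 2: 2, 3: 3, 4: 1, 5: 3, 6: 2, 7: 3, 8: 0}; keeping only layer 0
        # leaves one frame per GOP, layers {0, 1} leave two, and so on.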

    Transport of video over partial order connections

    A Partial Order and partially reliable Connection (POC) is an end-to-end transport connection authorized to deliver objects in an order that can differ from the transmitted one. Such a connection is also authorized to lose some objects. The POC concept is motivated by the fact that heterogeneous best-effort networks such as the Internet are plagued by unordered delivery of packets and by losses, which tax the performance of current applications and protocols. Several research works have shown that out-of-order delivery can reduce, with respect to a connection-oriented (CO) service, the use of end systems' communication resources. In this paper, the efficiency of out-of-sequence delivery for the processing of MPEG video streams is studied. First, the transport constraints (in terms of order and reliability) that can be relaxed by MPEG video decoders to improve video transport are detailed. Then, the performance gain induced by this approach is analyzed in terms of blocking times and recovered errors. We demonstrate that POC connections not only fill the conceptual gap between TCP and UDP but also provide real performance improvements for the transport of multimedia streams such as MPEG video.
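
    As a hypothetical sketch of the relaxed constraints mentioned above (not the paper's protocol or decoder), the snippet below derives, for each picture of a display-order GOP pattern, whether a partially reliable connection may lose it and which pictures must arrive before it can be decoded; B-pictures are not used as references, so their loss or reordering does not stall the decoder. All names are illustrative.

    from dataclasses import dataclass

    @dataclass
    class FrameConstraint:
        index: int
        frame_type: str
        may_be_lost: bool        # partial reliability: nothing references this frame
        must_wait_for: list      # partial order: indices of reference frames needed first

    def gop_constraints(pattern: str) -> list:
        """Derive per-frame transport constraints from a display-order GOP pattern."""
        constraints = []
        last_anchor = None                      # most recent I or P picture
        for i, t in enumerate(pattern):
            if t == "I":
                deps = []
            elif t == "P":
                deps = [last_anchor] if last_anchor is not None else []
            else:                               # a B-picture needs its surrounding anchors
                nxt = next((j for j in range(i + 1, len(pattern)) if pattern[j] in "IP"), None)
                deps = [d for d in (last_anchor, nxt) if d is not None]
            constraints.append(FrameConstraint(i, t, may_be_lost=(t == "B"), must_wait_for=deps))
            if t in "IP":
                last_anchor = i
        return constraints

    if __name__ == "__main__":
        for c in gop_constraints("IBBPBBPBB"):
            print(c)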

    Overview of MV-HEVC prediction structures for light field video

    Light field video is a promising technology for delivering the six degrees of freedom required for natural content in virtual reality. Existing multi-view coding (MVC) and multi-view plus depth (MVD) formats, such as MV-HEVC and 3D-HEVC, are the most conventional light field video coding solutions, since they can compress video sequences captured simultaneously from multiple camera angles. 3D-HEVC treats a single view as a video sequence and the other sub-aperture views as gray-scale disparity (depth) maps, whereas MV-HEVC treats each view as a separate video sequence, which allows the use of motion-compensated algorithms similar to HEVC. While MV-HEVC and 3D-HEVC provide similar results, MV-HEVC does not require any disparity maps to be readily available, and it has a more straightforward implementation since it only uses syntax elements rather than additional prediction tools for inter-view prediction. However, there are many degrees of freedom in choosing an appropriate prediction structure, and it is currently still unknown which one is optimal for a given set of application requirements. In this work, various prediction structures for MV-HEVC are implemented and tested. The findings reveal the trade-off between compression gains, distortion and random access capabilities in MV-HEVC light field video coding. The results give an overview of the best-performing solutions developed in the context of this work and of prediction structure algorithms proposed in state-of-the-art literature. This overview provides a useful benchmark for future development of light field video coding solutions.
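
    The snippet below is a minimal, hypothetical sketch of describing an inter-view prediction structure as a graph in which each view lists the already-coded views it may reference, together with a simple random-access metric. The specific structure (a central base view with the remaining views chained outwards) and all names are illustrative assumptions, not the structures evaluated in the paper.

    def central_reference_structure(num_views: int):
        """Chain views outwards from a central base view (which has no references)."""
        center = num_views // 2
        refs = {}
        for v in range(num_views):
            if v == center:
                refs[v] = []                       # base view: independently decodable
            elif v < center:
                refs[v] = [v + 1]                  # reference the neighbour nearer the centre
            else:
                refs[v] = [v - 1]
        return refs

    def random_access_cost(refs: dict, view: int) -> int:
        """Number of views that must be decoded before `view` (a simple random-access metric)."""
        count, current = 0, view
        while refs[current]:
            current = refs[current][0]
            count += 1
        return count

    if __name__ == "__main__":
        structure = central_reference_structure(5)
        print(structure)                                               # {0: [1], 1: [2], 2: [], 3: [2], 4: [3]}
        print([random_access_cost(structure, v) for v in range(5)])    # [2, 1, 0, 1, 2]

    More inter-view references generally improve compression but lengthen these dependency chains, which is one way to see the compression versus random-access trade-off mentioned in the abstract.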

    A joint motion & disparity motion estimation technique for 3D integral video compression using evolutionary strategy

    3D imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging, which can capture true 3D color images with only one camera, has been seen as the right technology to offer stress-free viewing to audiences of more than one person. Just like any digital video, 3D video sequences must be compressed to make them suitable for consumer-domain applications. However, ordinary compression techniques found in state-of-the-art video coding standards such as H.264, MPEG-4 and MPEG-2 are not capable of producing enough compression while preserving the 3D cues. Fortunately, a huge amount of redundancy can be found in an integral video sequence in terms of motion and disparity. This paper discusses a novel approach that uses both motion and disparity information to compress 3D integral video sequences. We propose to decompose the integral video sequence into viewpoint video sequences and to jointly exploit motion and disparity redundancies to maximize the compression. We further propose an optimization technique based on evolutionary strategies to minimize the computational complexity of the joint motion and disparity estimation. Experimental results demonstrate that joint motion and disparity estimation can achieve over 1 dB of objective quality gain over normal motion estimation. When combined with the evolutionary strategy, it can achieve up to 94% computational cost saving.
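
    The snippet below is a hedged sketch, on synthetic data and with illustrative names, of how a simple (1+1) evolutionary strategy can search for a displacement vector that minimizes the sum of absolute differences (SAD) between a current block and a candidate block in a reference picture; the paper's joint motion and disparity search is more elaborate than this.

    import numpy as np

    def sad(block, ref, top, left):
        """SAD between `block` and the equally sized block at (top, left) in `ref`."""
        h, w = block.shape
        if top < 0 or left < 0 or top + h > ref.shape[0] or left + w > ref.shape[1]:
            return np.inf                                # candidate falls outside the frame
        return np.abs(block.astype(int) - ref[top:top + h, left:left + w].astype(int)).sum()

    def es_block_match(block, ref, start, search_range=16, iterations=200, sigma=4.0, seed=0):
        """(1+1)-ES: mutate the current vector with Gaussian noise, keep it only if SAD improves."""
        rng = np.random.default_rng(seed)
        best = np.array(start, dtype=float)
        best_cost = sad(block, ref, int(best[0]), int(best[1]))
        for _ in range(iterations):
            cand = best + rng.normal(0.0, sigma, size=2)
            cand = np.clip(cand, np.array(start) - search_range, np.array(start) + search_range)
            cost = sad(block, ref, int(cand[0]), int(cand[1]))
            if cost < best_cost:
                best, best_cost = cand, cost
        return best.astype(int), best_cost

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        block = ref[20:28, 30:38]                        # the true displacement is (20, 30)
        vector, cost = es_block_match(block, ref, start=(16, 16))
        print(vector, cost)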

    Prediction error image coding using a modified stochastic vector quantization scheme

    The objective of this paper is to provide an efficient yet simple method to encode the prediction error image of video sequences, based on a stochastic vector quantization (SVQ) approach that has been modified to cope with the intrinsically decorrelated nature of the prediction error image of video signals. In the SVQ scheme, the codewords are generated by stochastic techniques instead of being generated from a training set representative of the expected input images, as is usual in VQ. The performance of the scheme is shown for the particular case of segmentation-based video coding, although the technique can also be applied to motion-compensated hybrid coding schemes.
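
    The snippet below is a minimal sketch of the stochastic codebook idea described above: codewords are drawn from a random model assumed to match prediction-error statistics (a zero-mean Laplacian here, which is an assumption) rather than trained on example images, and each error block is then mapped to its nearest codeword. All names and parameters are illustrative, not the paper's scheme.

    import numpy as np

    def stochastic_codebook(num_codewords, block_size=4, scale=5.0, seed=0):
        """Generate codewords by sampling a Laplacian, roughly matching residual statistics."""
        rng = np.random.default_rng(seed)
        return rng.laplace(0.0, scale, size=(num_codewords, block_size * block_size))

    def quantize_blocks(blocks, codebook):
        """Map each flattened block to the index of its nearest codeword (squared error)."""
        dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return dists.argmin(axis=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        residual_blocks = rng.laplace(0.0, 5.0, size=(10, 16))   # synthetic prediction-error blocks
        codebook = stochastic_codebook(num_codewords=64)
        indices = quantize_blocks(residual_blocks, codebook)
        print(indices)   # only these indices (plus the shared seed) need to be transmitted

    Because the decoder can regenerate the same codebook from the seed, no trained codebook has to be sent, which is the practical appeal of the stochastic approach.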

    3D video coding and transmission

    The capture, transmission, and display of 3D content has gained a lot of attention in the last few years. 3D multimedia content is no longer confined to cinema theatres but is being transmitted using stereoscopic video over satellite, shared on Blu-Ray™ disks, or sent over Internet technologies. Stereoscopic displays are needed at the receiving end, and the viewer needs to wear special glasses to present the two versions of the video to the human vision system that then generates the 3D illusion. To be more effective and improve the immersive experience, more views are acquired from a larger number of cameras and presented on different displays, such as autostereoscopic and light field displays. These multiple views, combined with depth data, also allow enhanced user experiences and new forms of interaction with the 3D content from virtual viewpoints. This type of audiovisual information is represented by a huge amount of data that needs to be compressed and transmitted over bandwidth-limited channels. Part of the COST Action IC1105 "3D Content Creation, Coding and Transmission over Future Media Networks" (3DConTourNet) focuses on this research challenge.