
    Viewpoint switching in multiview videos using SP-frames

    The distinguishing feature of multiview video is its interactivity, which allows users to select their preferred viewpoint. Rather than transmitting all views, the bitstream is switched to a particular view only when necessary. The SP-frame, new in H.264, was originally developed for multiple-bit-rate streaming with support for seamless switching, and it can be directly employed for viewpoint switching in multiview videos. Although SP-frames guarantee seamless switching, the cost is the bulky size of the secondary SP-frames, which demands a significant amount of additional storage space or transmission bandwidth, especially in the multiview scenario. For this reason, this paper designs a new motion estimation and compensation technique operating in the quantized DCT (QDCT) domain for coding secondary SP-frames. The proposed work aims to keep the secondary SP-frames as small as possible, without affecting the size of the primary SP-frames, by incorporating QDCT-domain motion estimation and compensation into secondary SP-frame coding. Simulation results show that the size of secondary SP-frames can be reduced remarkably in viewpoint switching. Index Terms — Multiview, viewpoint switching, SP-frame, QDCT domain, motion estimation
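    A minimal sketch of the QDCT-domain idea: motion search and compensation operate directly on blocks of quantized DCT coefficients rather than on pixels, so the coefficient residual that the secondary SP-frame must code stays small. The names, block handling, and SAD cost below are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch: block matching in the quantized-DCT (QDCT) domain.
# Blocks are arrays of already-quantized coefficients; matching on them
# avoids decoding back to pixels before motion compensation.
import numpy as np

def qdct_sad(a: np.ndarray, b: np.ndarray) -> int:
    """Sum of absolute differences between two quantized-coefficient blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def qdct_motion_search(cur_block: np.ndarray, candidate_blocks: list) -> tuple:
    """Pick the candidate whose quantized coefficients best match the current
    block; the (ideally small) coefficient residual is what the secondary
    SP-frame would then have to code."""
    costs = [qdct_sad(cur_block, ref) for ref in candidate_blocks]
    best = int(np.argmin(costs))
    residual = cur_block.astype(np.int32) - candidate_blocks[best].astype(np.int32)
    return best, residual
```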

    The improved SP frame coding technique for the JVT standard

    An efficient and flexible coding technique, inspired by the SP frame in the H.26L standard, is proposed in this paper; it achieves drift-free bitstream switching at a predicted frame. The proposed scheme improves the coding efficiency of the SP frames in the H.26L standard by limiting the mismatch between the references used for prediction and reconstruction, using two DCT-coefficient coding modes and rate-distortion optimization. Furthermore, the proposed scheme allows independent quantization parameters for the up-switching and down-switching bitstreams, which further reduces the switching-bitstream size while preserving the coding efficiency of the normal bitstreams. With the proposed SP technique, down-switching can be more rapid and frequent than up-switching, and the down-switching bitstream can be made much smaller; these are very desirable features for TCP-friendly protocols. Compared with the original SP method for H.26L, the proposed SP method improves coding efficiency by up to 1.0 dB. This SP technique has been officially accepted into the JVT standard.
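    The mismatch-versus-rate trade-off described above is typically resolved with Lagrangian rate-distortion optimization. The sketch below shows only the generic decision rule, cost = D + λ·R; the mode names, numbers, and λ value are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: Lagrangian rate-distortion mode decision between
# coefficient coding modes, cost = distortion + lambda * rate.
def rd_select_mode(modes, lam):
    """modes: iterable of (name, distortion, rate_bits); return the name
    of the mode with the minimum Lagrangian cost."""
    return min(modes, key=lambda m: m[1] + lam * m[2])[0]

# Illustrative numbers only: an exact (drift-free, costlier) mode versus a
# mode that tolerates a bounded prediction/reconstruction mismatch.
modes = [("exact_reconstruction", 120.0, 900),
         ("bounded_mismatch", 150.0, 620)]
print(rd_select_mode(modes, lam=0.3))  # -> "bounded_mismatch" here
```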

    Efficient Support for Application-Specific Video Adaptation

    As video applications become more diverse, video must be adapted in different ways to meet the requirements of different applications when resources are insufficient. In this dissertation, we address two requirements that existing video adaptation technologies cannot meet: (i) accommodating large variations in resolution and (ii) collecting video effectively in a multi-hop sensor network. We also address the system requirements for implementing video adaptation in a sensor network.

    Accommodating large variations in resolution is required by display devices with widely disparate screen sizes. Existing resolution-adaptation technologies usually aim at adapting video between two resolutions; we examine the limitations that prevent them from supporting a large number of resolutions efficiently. We propose several hybrid schemes and study their performance. Among them, Bonneville, a framework that combines multiple encodings with limited scalability, makes good trade-offs when organizing compressed video to support a wide range of resolutions.

    Video collection in a sensor network requires adapting video in a multi-hop store-and-forward network with multiple video sources. This task cannot be supported effectively by existing adaptation technologies, which are designed for real-time streaming from a single source over IP-style end-to-end connections. We propose to adapt video in the network instead of at the network edge, and we propose a framework, Steens, to compose adaptation mechanisms on multiple nodes. We design two signaling protocols in Steens to coordinate the nodes. Our simulations show that in-network adaptation can use buffer space on intermediate nodes for adaptation and achieve better video quality than conventional network-edge adaptation; they also show that explicit collaboration among nodes through signaling can improve video quality, waste less bandwidth, and maintain bandwidth-sharing fairness.

    Implementing video adaptation in a sensor network requires system support for programmability, retaskability, and high performance. We propose Cascades, a component-based framework, to provide this support. A prototype implementation of Steens in this framework shows a performance overhead of less than 5% compared to a hard-coded C implementation.
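    To illustrate the Bonneville idea of combining multiple encodings with limited scalability, the sketch below serves a requested resolution from the lowest-rate stored encoding whose scalable band covers it. The bands and selection rule are assumptions for illustration, not the dissertation's actual design.

```python
# Hypothetical sketch: serve a target resolution from one of several stored
# encodings, each scalable only within a limited band of heights.
ENCODINGS = [
    {"name": "low",  "base": 180, "top": 360},   # scales 180p..360p
    {"name": "mid",  "base": 360, "top": 720},   # scales 360p..720p
    {"name": "high", "base": 720, "top": 1440},  # scales 720p..1440p
]

def pick_encoding(target_height: int):
    """Return the first (lowest-rate) encoding whose band covers the target,
    or None if the request is outside the supported range."""
    for enc in ENCODINGS:
        if enc["base"] <= target_height <= enc["top"]:
            return enc
    return None

assert pick_encoding(480)["name"] == "mid"
assert pick_encoding(100) is None
```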