
    Multi-loop quality scalability based on high efficiency video coding

    Scalable video coding performance largely depends on the underlying single-layer coding efficiency. In this paper, quality scalability is evaluated on the basis of the new High Efficiency Video Coding (HEVC) standard, which is under development. To enable the evaluation, a multi-loop codec has been designed using HEVC. Adaptive inter-layer prediction is realized by including the lower layer in the reference list of the enhancement layer. As a result, adaptive scalability is accomplished at both the frame level and the prediction-unit level. Compared to single-layer coding, a 19.4% Bjontegaard Delta bitrate increase is measured over approximately a 30 dB to 40 dB PSNR range. Compared to simulcast, a 20.6% bitrate reduction can be achieved. Under equivalent conditions, the presented technique achieves a 43.8% bitrate reduction over the Coarse Grain Scalability of the H.264/AVC-based SVC standard.
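
    A minimal, hedged sketch of the inter-layer prediction idea described above: the reconstructed base-layer picture is treated as one more entry in the enhancement layer's reference list, and each block simply picks whichever reference predicts it more cheaply. The block size, SAD cost and NumPy representation are illustrative assumptions, not the paper's codec.

```python
# Minimal sketch (assumptions, not the paper's HEVC codec): quality-scalable
# inter-layer prediction where the reconstructed base-layer picture is just
# one more entry in the enhancement layer's reference list, and each block
# picks the reference with the lowest SAD.
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def predict_enhancement(el_frame, references, block=16):
    """Per block, choose the cheapest reference (index into `references`)
    and build the corresponding prediction image."""
    h, w = el_frame.shape
    choice = np.zeros((h // block, w // block), dtype=np.int32)
    pred = np.zeros_like(el_frame)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cur = el_frame[by:by + block, bx:bx + block]
            costs = [sad(cur, r[by:by + block, bx:bx + block]) for r in references]
            best = int(np.argmin(costs))
            choice[by // block, bx // block] = best
            pred[by:by + block, bx:bx + block] = references[best][by:by + block, bx:bx + block]
    return choice, pred

# references[0]: previous enhancement-layer reconstruction (temporal prediction)
# references[1]: co-located base-layer reconstruction (inter-layer prediction);
# same resolution here because the target is quality (SNR) scalability.
el = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
refs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(2)]
block_choices, prediction = predict_enhancement(el, refs)
```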

    Video adaptation for mobile digital television

    Mobile digital television is one of the new services recently introduced in the market by telecommunications operators. Given the possibilities for personalization and interaction it provides, together with the increasing demand for this type of portable service, it is expected to become a successful technology in the near future. Video content stored and transmitted over the networks deployed to provide mobile digital television needs to be compressed to reduce the resources required. The compression scheme chosen by the great majority of these networks is H.264/AVC. Compressed video bitstreams have to be adapted to heterogeneous networks and a wide range of terminals. To deal with this problem, scalable video coding schemes were proposed and standardized, providing temporal, spatial and quality scalability through layers within the encoded bitstream. Because existing H.264/AVC content cannot benefit from scalability tools, efficient techniques for migrating single-layer content to scalable content are desirable to support these mobile digital television systems. This paper proposes a technique to convert a single-layer H.264/AVC bitstream into a scalable bitstream with temporal scalability. Applying this approach, a 60% reduction in coding complexity is achieved while maintaining the coding efficiency.
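
    The temporal scalability targeted by this conversion can be illustrated with a dyadic temporal-layer assignment, the usual basis for temporal scalability in H.264-family codecs. The sketch below only shows how frames map to droppable layers; it is not the paper's single-layer-to-scalable migration algorithm, and the GOP size of 8 is an assumption.

```python
# Illustrative sketch (assumed GOP structure, not the paper's transcoder):
# dyadic temporal-layer assignment. Each discarded top layer halves the
# frame rate, which is what temporal scalability exposes to the network.
def temporal_layer(poc, gop_size=8):
    """Temporal layer of a frame from its picture order count (POC) within
    a dyadic GOP: layer 0 = key pictures, higher layers = more droppable."""
    if poc % gop_size == 0:
        return 0
    layer, step = 0, gop_size
    while poc % step != 0:
        step //= 2
        layer += 1
    return layer

def extract_substream(frame_pocs, max_layer, gop_size=8):
    """Keep only the frames whose temporal layer does not exceed max_layer."""
    return [p for p in frame_pocs if temporal_layer(p, gop_size) <= max_layer]

pocs = list(range(17))                                 # two GOPs of 8 frames
print([temporal_layer(p) for p in pocs])               # layer per frame
print(extract_substream(pocs, max_layer=2))            # half the frame rate
```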

    Scalable Video Coding

    With the evolution of the Internet into networks that are heterogeneous both in processing power and in network bandwidth, different users demand different versions of the same content. This has given birth to the scalable era of video content, in which a single bitstream contains multiple versions of the same video that differ in resolution, frame rate or quality. Several early standards, such as MPEG-2 Video, H.263 and MPEG-4 Part 2, already include tools providing different modalities of scalability. However, the scalable profiles of these standards are seldom used, because their scalability comes with a significant loss in coding efficiency and the Internet was still in its early stages. The scalable extension of H.264/AVC, named scalable video coding (SVC), was published in July 2007. It introduces several new coding techniques and narrows the coding-efficiency gap with state-of-the-art non-scalable codecs while keeping the complexity increase reasonable. After an introduction to scalable video coding, we present a proposal regarding the scalable functionality of H.264/AVC: improving the compression ratio of the enhancement layers (ELs) of a subband/wavelet-based scalable bitstream. A new adaptive scanning methodology is presented for the intra-frame scalable coding framework of H.264/AVC based on the subband/wavelet coding approach. It takes advantage of prior knowledge of the frequencies present in the different higher-frequency subbands. Thus, merely by modifying the scan order of the intra-frame scalable coding framework of H.264/AVC, better compression can be obtained without any compromise in PSNR.
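
    The adaptive-scan idea can be sketched as follows: if a wavelet subband is known to concentrate its energy along rows or along columns, the coefficient scan of each block is reordered so that likely-significant coefficients come first and the trailing zero run grows. The subband labels, block size and fallback diagonal scan below are assumptions for illustration; the paper's actual scan tables for H.264/AVC are not reproduced here.

```python
# Toy sketch (assumed subband labels and block size, not the paper's scan
# tables): adapt the coefficient scan of a block to the wavelet subband it
# comes from, so likely-significant coefficients are visited first and the
# trailing run of zeros gets longer and cheaper to entropy-code.
import numpy as np

def scan_order(subband, n=4):
    """(row, col) visiting order for an n x n coefficient block."""
    if subband == 'row_detail':     # energy concentrated along rows
        return [(r, c) for r in range(n) for c in range(n)]
    if subband == 'col_detail':     # energy concentrated along columns
        return [(r, c) for c in range(n) for r in range(n)]
    # fallback: diagonal (zig-zag-like) order
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1], rc[0]))

def serialize(block, subband):
    """Flatten a coefficient block using the subband-adapted scan."""
    return [int(block[r, c]) for r, c in scan_order(subband, block.shape[0])]

coeffs = np.array([[9, 0, 0, 0],
                   [7, 0, 0, 0],
                   [3, 0, 0, 0],
                   [1, 0, 0, 0]])               # energy packed in one column
print(serialize(coeffs, 'col_detail'))          # -> [9, 7, 3, 1, 0, 0, ...]
```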

    Measurement of plant growth in view of an integrative analysis of regulatory networks

    As the regulatory networks of growth at the cellular level are elucidated at a fast pace, their complexity is not reduced; on the contrary, the tissue, organ and even whole-plant levels affect cell proliferation and expansion by means of development-induced and environment-induced signaling events in growth regulatory processes. Measurement of growth across different levels aids in gaining a mechanistic understanding of growth, and in defining the spatial and temporal resolution of sampling strategies for molecular analyses in the model Arabidopsis thaliana and, increasingly, also in crop species. The latter claim their place at the forefront of plant research, since global issues and future needs drive the translation of knowledge of growth processes acquired in laboratory models into improvements in crop productivity under field conditions.

    MASCOT : metadata for advanced scalable video coding tools : final report

    The goal of the MASCOT project was to develop new video coding schemes and tools that provide both increased coding efficiency and extended scalability features compared to the technology available at the beginning of the project. Towards that goal, the following tools were to be used: metadata-based coding tools; new spatiotemporal decompositions; and new prediction schemes. Although the initial goal was to develop one single codec architecture able to combine all the new coding tools foreseen when the project was formulated, it became clear that this would limit the selection of the new tools. The consortium therefore decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all tools developed during the course of the project.

    Contextual Media Retrieval Using Natural Language Queries

    The widespread integration of cameras in hand-held and head-worn devices, together with the ability to share content online, enables a large and diverse visual capture of the world that millions of users build up collectively every day. We envision these images, as well as associated meta-information such as GPS coordinates and timestamps, forming a collective visual memory that can be queried while automatically taking the ever-changing context of mobile users into account. As a first step towards this vision, in this work we present Xplore-M-Ego: a novel media retrieval system that allows users to query a dynamic database of images and videos using spatio-temporal natural language queries. We evaluate our system using a new dataset of real user queries as well as through a usability study. One key finding is that there is considerable inter-user variability, for example in the resolution of spatial relations in natural language utterances. We show that our retrieval system can cope with this variability using personalisation through an online learning-based retrieval formulation.
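
    As a hedged illustration of spatio-temporal query resolution of the kind described above (not the Xplore-M-Ego implementation), the sketch below scores geo-tagged media against a spatial relation interpreted relative to the user's current position and heading, with a per-user bias term standing in for the parameters an online learner could personalise. All names, coordinates and the scoring function are assumptions.

```python
# Hedged sketch (assumed scoring, not the Xplore-M-Ego system): rank geo-
# tagged media for a spatial-relation query resolved relative to the user's
# position and heading; `user_bias` stands in for a per-user parameter an
# online learner could adapt from accepted/rejected results.
import math

def bearing(user, item):
    """Compass-style bearing (degrees) from user to item, treating the
    (lat, lon) deltas as planar, which is adequate at city scale."""
    dy, dx = item[0] - user[0], item[1] - user[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def relation_score(relation, user_pos, user_heading, item_pos, user_bias=0.0):
    """Higher is better. 'left'/'right' compare the item's bearing with the
    user's heading; 'near' falls back to inverse distance."""
    rel_bearing = (bearing(user_pos, item_pos) - user_heading) % 360
    if relation == 'left':
        return math.cos(math.radians(rel_bearing - (270.0 + user_bias)))
    if relation == 'right':
        return math.cos(math.radians(rel_bearing - (90.0 + user_bias)))
    return 1.0 / (1.0 + math.dist(user_pos, item_pos))

# Hypothetical items and user state (coordinates are made up).
items = {'cafe.jpg': (48.267, 11.670), 'tower.mp4': (48.270, 11.665)}
user, heading = (48.268, 11.668), 0.0       # facing "north" in this toy frame
ranked = sorted(items, key=lambda k: -relation_score('left', user, heading, items[k]))
print(ranked)
```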

    Early forest fire detection by vision-enabled wireless sensor networks

    Wireless sensor networks constitute a powerful technology particularly suitable for environmental monitoring. With regard to wildfires, they enable low-cost, fine-grained surveillance of hazardous locations such as wildland-urban interfaces. This paper presents work developed during the last four years targeting a vision-enabled wireless sensor network node for the reliable, early on-site detection of forest fires. The tasks carried out ranged from devising a robust vision algorithm for smoke detection to the design and physical implementation of a power-efficient smart imager tailored to the characteristics of that algorithm. By integrating this smart imager with a commercial wireless platform, we endowed the resulting system with vision capabilities and radio communication. Numerous tests were arranged in different natural scenarios in order to progressively tune all the parameters involved in the autonomous operation of this prototype node. The last test carried out, involving the prescribed burning of a 95 x 20 m shrub plot, confirmed the high degree of reliability of our approach in terms of both successful early detection and a very low false-alarm rate. Funding: Ministerio de Ciencia e Innovación (TEC2009-11812, IPT-2011-1625-430000); Office of Naval Research (USA) (N000141110312); Centro para el Desarrollo Tecnológico e Industrial (IPC-2011100).
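
    Purely as an illustration of the kind of low-complexity vision processing such a node might run (the paper's actual smoke-detection algorithm and smart-imager pipeline are not reproduced here), the sketch below flags frame sequences in which a sustained region of grayish change appears against a running-average background model; all thresholds are assumed values.

```python
# Illustration only (assumed thresholds, not the paper's smoke-detection
# algorithm): flag a frame sequence when a sustained region of grayish change
# shows up against a running-average background model.
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background; alpha controls how fast it adapts."""
    return (1 - alpha) * bg + alpha * frame

def smoke_candidate_ratio(frame, bg, diff_thresh=12, grayness_thresh=20):
    """Fraction of pixels that changed w.r.t. the background AND look grayish
    (R, G and B close to each other), as drifting smoke tends to be."""
    diff = np.abs(frame.astype(np.float32) - bg).mean(axis=2)
    changed = diff > diff_thresh
    grayish = (frame.max(axis=2).astype(np.int32) - frame.min(axis=2)) < grayness_thresh
    return float(np.logical_and(changed, grayish).mean())

def detect(frames, persist=5, ratio_thresh=0.02):
    """Alarm only if the candidate ratio stays high for `persist` consecutive
    frames, which keeps the false-alarm rate down."""
    bg = frames[0].astype(np.float32)
    streak = 0
    for f in frames[1:]:
        if smoke_candidate_ratio(f, bg) > ratio_thresh:
            streak += 1
            if streak >= persist:
                return True
        else:
            streak = 0
        bg = update_background(bg, f.astype(np.float32))
    return False

frames = [np.random.randint(0, 256, (60, 80, 3), dtype=np.uint8) for _ in range(20)]
print(detect(frames))
```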

    Simulation and experimental testbed for adaptive video streaming in ad hoc networks

    This paper presents a performance evaluation of scalable video streaming over mobile ad hoc networks. In particular, we focus on a rate-adaptive method for streaming scalable video (H.264/SVC). For effective adaptation, a new cross-layer routing protocol is introduced. This protocol provides an efficient algorithm for available-bandwidth estimation. With this information, the video source adjusts its bit rate during the video transmission according to the network state. We also propose a free simulation framework that supports evaluation studies of scalable video streaming. The simulation experiments performed in this study involve the transmission of SVC streams with Medium Grain Scalability (MGS) as well as temporal scalability over different network scenarios. The results reveal that the rate-adaptive strategy helps avoid or reduce congestion in MANETs, yielding better quality in the received videos. Additionally, an actual ad hoc network was implemented using embedded devices (Raspberry Pi) in order to assess the performance of the proposed adaptive transmission mechanism in a real environment. Additional experiments were carried out prior to the implementation with the aim of characterizing the wireless medium and the packet-loss profile. Finally, as the study revealed, the proposed approach yields an important reduction in energy consumption. This work was performed with the support of the National Secretary of Higher Education, Science, Technology and Innovation (SENESCYT) of the Ecuadorian Government (scholarship 195-2012) and the Multimedia Communications Group (COMM), belonging to the Institute of Telecommunications and Multimedia Applications (iTEAM), Universitat Politècnica de València. Gonzalez-Martinez, SR.; Castellanos Hernández, WE.; Guzmán Castillo, PF.; Arce Vila, P.; Guerri Cebollada, JC. (2016). Simulation and experimental testbed for adaptive video streaming in ad hoc networks. Ad Hoc Networks, 52, 89-105. https://doi.org/10.1016/j.adhoc.2016.07.007
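
    A hedged sketch of the rate-adaptation loop described above, not the paper's cross-layer protocol: the sender smooths the available-bandwidth estimates reported by the routing layer and keeps only as many SVC layers (base, temporal, MGS quality) as fit under a safety margin. The layer bitrates, smoothing factor and margin are illustrative assumptions.

```python
# Hedged sketch (assumed layer rates and parameters, not the paper's
# cross-layer protocol): smooth the available-bandwidth samples reported by
# the routing layer and keep the largest prefix of SVC layers that fits.
def smooth_bandwidth(samples_kbps, beta=0.8):
    """Exponentially weighted moving average of bandwidth samples (kbps)."""
    estimate = samples_kbps[0]
    for s in samples_kbps[1:]:
        estimate = beta * estimate + (1 - beta) * s
    return estimate

def select_layers(layer_rates_kbps, available_kbps, margin=0.9):
    """layer_rates_kbps lists each layer's extra bitrate in decoding-
    dependency order (base layer first). Add layers while the running total
    stays under a safety margin of the estimated bandwidth."""
    chosen, total = [], 0.0
    for i, rate in enumerate(layer_rates_kbps):
        if total + rate <= margin * available_kbps:
            chosen.append(i)
            total += rate
        else:
            break
    return chosen, total

# Example: base layer 300 kbps, one temporal layer, two MGS quality layers.
layers = [300, 150, 200, 250]
bw = smooth_bandwidth([900, 750, 620, 580])
print(select_layers(layers, bw))      # -> ([0, 1, 2], 650.0)
```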

    Fully Scalable Video Coding Using Redundant-Wavelet Multihypothesis and Motion-Compensated Temporal Filtering

    In this dissertation, a fully scalable video coding system is proposed. This system achieves full temporal, resolution, and fidelity scalability by combining mesh-based motion-compensated temporal filtering, multihypothesis motion compensation, and an embedded 3D wavelet-coefficient coder. The first major contribution of this work is the introduction of the redundant-wavelet multihypothesis paradigm into motion-compensated temporal filtering, which is achieved by deploying the temporal filtering in the domain of a spatially redundant wavelet transform. A regular triangle mesh is used to track motion between frames, and an affine transform between mesh triangles implements motion compensation within a lifting-based temporal transform. Experimental results reveal that incorporating redundant-wavelet multihypothesis into mesh-based motion-compensated temporal filtering significantly improves the rate-distortion performance of the scalable coder. The second major contribution is a sliding-window implementation of motion-compensated temporal filtering, such that video sequences of arbitrary length may be temporally filtered using a finite-length frame buffer without suffering severe degradation at buffer boundaries. Finally, as a third major contribution, a novel 3D coder is designed for coding the 3D volume of coefficients resulting from the redundant-wavelet-based temporal filtering. This coder employs an explicit estimate of the probability of coefficient significance to drive a nonadaptive arithmetic coder, resulting in a simple software implementation. Additionally, the coder offers the possibility of a high degree of vectorization, particularly well suited to the data-parallel capabilities of modern general-purpose processors or customized hardware. Results show that the proposed coder yields nearly the same rate-distortion performance as a more complicated coefficient coder considered to be state of the art.
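
    The lifting structure at the heart of motion-compensated temporal filtering can be sketched in a few lines. The version below reduces it to a Haar lifting pair with the motion-compensation step stubbed out as an identity warp; the dissertation's mesh-based affine warping and redundant-wavelet multihypothesis prediction would take the place of `warp`. It illustrates the lifting structure only, under those stated simplifications.

```python
# Sketch of the lifting structure of MCTF only (the dissertation's mesh-based
# affine warping and redundant-wavelet multihypothesis prediction would
# replace `warp`, which is left as an identity placeholder here).
import numpy as np

def warp(frame, motion=None):
    """Placeholder for motion compensation toward the temporal neighbour."""
    return frame

def mctf_analysis(even, odd):
    """One Haar lifting step: predict the odd frame from the (warped) even
    frame to get a high-pass frame H, then update the even frame with H to
    get a low-pass frame L."""
    H = odd - warp(even)
    L = even + 0.5 * warp(H)
    return L, H

def mctf_synthesis(L, H):
    """Invert the lifting steps to recover the original frame pair."""
    even = L - 0.5 * warp(H)
    odd = H + warp(even)
    return even, odd

a, b = np.random.rand(8, 8), np.random.rand(8, 8)
L, H = mctf_analysis(a, b)
ra, rb = mctf_synthesis(L, H)
assert np.allclose(ra, a) and np.allclose(rb, b)   # perfect reconstruction
```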