Low-complexity motion estimation for the Scalable Video Coding extension of H.264/AVC
The recently standardized Scalable Video Coding (SVC) extension of H.264/AVC provides bitstream scalability with improved rate-distortion efficiency over the classical simulcast approach, at the cost of increased computational complexity in the encoding process. Complexity reduction is therefore a critical issue for the practical deployment of SVC, and fundamental to its use in consumer applications. In this paper, we present a fully scalable fast motion estimation algorithm that achieves an excellent complexity-performance trade-off.
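The paper's specific algorithm is not reproduced in the abstract; as a rough illustration of what "low-complexity" block matching means in practice, here is a minimal small-diamond-search sketch in Python (all names hypothetical). Instead of testing every candidate in the search window, the search greedily follows the steepest local reduction in the sum of absolute differences (SAD):

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy)."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf  # displacement falls outside the reference frame
    return np.abs(cur[by:by + bs, bx:bx + bs].astype(np.float64)
                  - ref[y:y + bs, x:x + bs].astype(np.float64)).sum()

def diamond_search(cur, ref, bx, by, bs=16, max_iters=32):
    """Greedy small-diamond search: from the current best vector, test the
    four axis neighbours and move to the best one until no neighbour
    improves the SAD. Cost grows with the path length, not the window area."""
    dx = dy = 0
    best = sad(cur, ref, bx, by, dx, dy, bs)
    for _ in range(max_iters):
        cands = [(sad(cur, ref, bx, by, dx + sx, dy + sy, bs), dx + sx, dy + sy)
                 for sx, sy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        c, cx, cy = min(cands)
        if c < best:
            best, dx, dy = c, cx, cy
        else:
            break  # local minimum reached
    return (dx, dy), best
```

On smooth image content the search converges in a handful of steps; a full search over a ±16 window would evaluate over a thousand candidates per block instead.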
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media, including text, images, 3D graphics, audio and video, are produced, distributed, shared, managed and consumed on-line through various networks, such as the Internet, fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users have been confronted with a bewildering range of media, services and applications, and with technological innovations concerning media formats, wireless networks, and terminal types and capabilities; there is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one multimedia file, and over 47 million of them do so regularly, searching in more than 160 exabytes of content. In the near future these numbers are expected to rise exponentially: Internet content is expected to grow by at least a factor of six, to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near to mid term the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalized way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and innovative applications “on the move”, such as virtual collaboration environments, personalised services and media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content, combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects under Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
Implementation of Video Compression Standards in Digital Television
In this paper, video compression standards used in digital television systems are discussed. Basic concepts of video compression and the principles of lossy and lossless compression are given. Techniques of video compression (intraframe and interframe compression), the types of frames and the principles of bit-rate compression are discussed. Characteristics of standard-definition television (SDTV), high-definition television (HDTV) and ultra-high-definition television (UHDTV) are given. The principles of the MPEG-2, MPEG-4 and High Efficiency Video Coding (HEVC) compression standards are analyzed. An overview of the basic video compression standards and of the impact of compression on the quality of TV images and on the number of TV channels in the multiplexes of terrestrial and satellite digital TV transmission is given. This work is divided into six sections.
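To give a sense of why compression is indispensable in digital television, a back-of-the-envelope calculation of uncompressed bitrates for the formats mentioned above (the figures and the 4:2:0 assumption are standard values, not taken from this paper):

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=12):
    """Uncompressed video bitrate in Mbit/s, assuming 4:2:0 chroma
    subsampling: 8 luma bits plus 4 chroma bits per pixel on average."""
    return width * height * fps * bits_per_pixel / 1e6

# Typical progressive formats at 50 fps, for illustration
formats = {"SDTV 576p": (720, 576),
           "HDTV 1080p": (1920, 1080),
           "UHDTV 2160p": (3840, 2160)}
for name, (w, h) in formats.items():
    print(f"{name}: {raw_bitrate_mbps(w, h, 50):.0f} Mbit/s uncompressed")
```

Even SDTV needs roughly 250 Mbit/s uncompressed, while a terrestrial DVB multiplex offers a few tens of Mbit/s for many channels, hence the compression ratios of 50:1 and beyond that MPEG-2, MPEG-4 and HEVC are designed to deliver.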
Assessing quality of experience of IPTV and video on demand services in real-life environments
The ever-growing bandwidth in access networks, in combination with IPTV and video-on-demand (VoD) offerings, opens up unlimited possibilities for users. Operators can no longer compete solely on the number of channels or on content, and increasingly make high-definition channels and quality of experience (QoE) a service differentiator. Currently, the most reliable way of assessing and measuring QoE is to conduct subjective experiments, in which human observers evaluate a series of short video sequences using one of the internationally standardized subjective quality assessment methodologies. Unfortunately, since these subjective experiments must be conducted in controlled environments and pose limitations on the sequences and on the overall experiment duration, they cannot be used for real-life QoE assessment of IPTV and VoD services. In this article, we propose a novel subjective quality assessment methodology based on full-length movies. Our methodology enables audiovisual quality assessment in the same environments and under the same conditions in which users typically watch television. Using our new methodology we conducted subjective experiments and compared the outcome with the results of a subjective test conducted using a standardized method. Our findings indicate significant differences in terms of impairment visibility and tolerance, and highlight the importance of real-life QoE assessment.
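The standardized methodologies the abstract refers to typically aggregate per-observer ratings into a mean opinion score (MOS) with a confidence interval. A minimal sketch of that aggregation step, with a hypothetical panel of ratings (the function and data are illustrative, not the paper's):

```python
import math

def mos_with_ci(scores, z=1.96):
    """Mean opinion score and approximate 95% confidence interval for one
    test condition, from per-observer ratings on a 1-5 scale."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    ci = z * math.sqrt(var / n)  # half-width of the confidence interval
    return mean, ci

ratings = [4, 5, 3, 4, 4, 5, 4, 3, 4, 4]  # hypothetical panel of 10 observers
m, ci = mos_with_ci(ratings)
print(f"MOS = {m:.2f} +/- {ci:.2f}")
```

In a controlled lab test such intervals are computed per short sequence; the article's point is that ratings gathered this way may not transfer to living-room viewing of full-length movies.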
Motion estimation and signaling techniques for 2D+t scalable video coding
We describe a fully scalable wavelet-based 2D+t (in-band) video coding architecture. We propose new coding tools specifically designed for this framework, aimed at two goals: reducing the computational complexity at the encoder without sacrificing compression, and improving the coding efficiency, especially at low bitrates. To this end, we focus our attention on motion estimation and motion vector encoding. We propose a fast motion estimation algorithm that works in the wavelet domain and exploits the geometrical properties of the wavelet subbands. We show that its computational complexity grows linearly with the size of the search window while approaching the performance of a full-search strategy. We extend the proposed motion estimation algorithm to work with blocks of variable size, in order to better capture local motion characteristics and thus improve the rate-distortion behavior. Given this motion field representation, we propose a motion vector coding algorithm that allows the motion bit budget to be adaptively scaled according to the target bitrate, improving the coding efficiency at low bitrates. Finally, we show how to optimally scale the motion field when the sequence is decoded at reduced spatial resolution. Experimental results illustrate the advantages of each individual coding tool presented in this paper. Based on these simulations, we define the best configuration of coding parameters and compare the proposed codec with MC-EZBC, a widely used reference codec implementing the t+2D framework.
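Two of the operations described above, scaling the motion bit budget and scaling the motion field for reduced-resolution decoding, can be sketched very simply. The functions below are illustrative stand-ins, not the paper's actual algorithms: the first coarsens vectors by truncating precision bits, the second halves them once per dropped spatial (wavelet) level:

```python
def coarsen_motion_field(mvs, dropped_bits):
    """Shrink the motion bit budget by truncating each vector component to a
    coarser grid: with dropped_bits = 2, vectors snap to multiples of 4.
    Fewer distinct values means fewer bits spent on motion at low rates."""
    q = 1 << dropped_bits
    return [(int(dx / q) * q, int(dy / q) * q) for dx, dy in mvs]

def rescale_motion_field(mvs, dropped_levels):
    """When the sequence is decoded at half the spatial resolution per
    dropped wavelet level, the motion vectors shrink by the same factor."""
    f = 1 << dropped_levels
    return [(dx / f, dy / f) for dx, dy in mvs]
```

The interesting part of the paper's contribution, choosing how much motion precision to keep at each target bitrate, sits on top of primitives like these.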
Broadcasting scalable video with generalized spatial modulation in cellular networks
This paper considers the transmission of scalable video via broadcast and multicast to increase spectral and energy efficiency in cellular networks. To address this problem, we study the use of generalized spatial modulation (GSM) combined with non-orthogonal hierarchical M-QAM modulations, owing to their capability to exploit the potential gains of large-scale antenna systems and achieve high spectral and energy efficiencies. We introduce the basic idea of broadcasting/multicasting scalable video with GSM, and discuss the key limitations. Non-uniform hierarchical QAM constellations are used for broadcasting/multicasting scalable video, while user-specific messages are carried implicitly on the indexes of the active transmit antenna combinations. To deal with multiple video and dedicated user streams multiplexed in the same transmission, an iterative receiver with reduced complexity is described. 5G New Radio (NR) based link- and system-level results are presented. Two different ways of quadrupling the number of broadcast programs are evaluated and compared. Performance results show that the proposed GSM scheme achieves flexibility and energy-efficiency gains over conventional multiple-input multiple-output (MIMO) schemes.
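The hierarchical QAM idea pairs naturally with scalable video: base-layer bits select the constellation quadrant (robust, decodable at low SNR), enhancement bits select the point within the quadrant. A minimal non-uniform 16-QAM mapper sketch, assuming a single spacing parameter `alpha` (this parameterization is illustrative, not the paper's exact constellation):

```python
def hier16qam_point(b1, b2, e1, e2, alpha=2.0):
    """Map 4 bits to a hierarchical 16-QAM symbol: base-layer bits (b1, b2)
    pick the quadrant, enhancement bits (e1, e2) the point inside it.
    alpha = 2 gives uniform 16-QAM (levels +/-1, +/-3); alpha > 2 pushes the
    quadrants apart, making the base layer more robust at the cost of the
    enhancement layer."""
    def axis(b, e):
        sign = 1 if b == 0 else -1          # quadrant half, from the base bit
        offset = 1 if e == 0 else -1        # inner position, from the enh. bit
        return sign * (alpha + offset)
    return complex(axis(b1, e1), axis(b2, e2))
```

A receiver with low SNR can still slice the quadrant (and hence the base video layer) correctly even when the inner point, carrying the enhancement layer, is lost to noise.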
3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes
Three-dimensional TV is expected to be the next revolution in the history of television. We implemented a 3D TV prototype system with real-time acquisition, transmission, and 3D display of dynamic scenes. We developed a distributed, scalable architecture to manage the high computation and bandwidth demands. Our system consists of an array of cameras, clusters of network-connected PCs, and a multi-projector 3D display. Multiple video streams are individually encoded and sent over a broadband network to the display. The 3D display shows high-resolution (1024 × 768) stereoscopic color images for multiple viewpoints without special glasses. We implemented systems with rear-projection and front-projection lenticular screens. In this paper, we provide a detailed overview of our 3D TV system, including an examination of design choices and tradeoffs. We present the calibration and image alignment procedures that are necessary to achieve good image quality. We present qualitative results and some early user feedback. We believe this is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience.
Layer Selection in Progressive Transmission of Motion-Compensated JPEG2000 Video
MCJ2K (Motion-Compensated JPEG2000) is a video codec based on MCTF (Motion-Compensated Temporal Filtering) and J2K (JPEG2000). MCTF analyzes a sequence of images, generating a collection of temporal sub-bands, which are compressed with J2K. The R/D (rate-distortion) performance of MCJ2K is better than that of the MJ2K (Motion JPEG2000) extension, especially when there is a high level of temporal redundancy. MCJ2K codestreams can be served by standard JPIP (J2K Interactive Protocol) servers, thanks to the use of only J2K standard file formats. In bandwidth-constrained scenarios, an important issue in MCJ2K is determining the amount of data of each temporal sub-band that must be transmitted to maximize the quality of the reconstructions at the client side. To solve this problem, we have proposed two rate-allocation algorithms which provide reconstructions that are progressive in quality. The first, OSLA (Optimized Sub-band Layers Allocation), determines the best progression of quality layers, but is computationally expensive. The second, ESLA (Estimated-Slope sub-band Layers Allocation), is sub-optimal in most cases, but much faster and more convenient for real-time streaming scenarios. An experimental comparison shows that, even when a straightforward motion compensation scheme is used, the R/D performance of MCJ2K is competitive not only with MJ2K, but also with other standard scalable video codecs.
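The rate-allocation problem described above can be illustrated with a generic slope-based greedy scheme: each sub-band offers quality layers with a cost in bits and a distortion reduction, and the allocator repeatedly takes the layer with the steepest distortion-per-bit slope until the budget is spent. This is a sketch in the spirit of slope-based allocation, not the paper's OSLA or ESLA algorithms:

```python
import heapq

def allocate_layers(subbands, budget):
    """Greedy layer selection. subbands maps a sub-band name to an ordered
    list of (bits, distortion_reduction) quality layers; layers within a
    sub-band must be taken in order, which the heap enforces by only
    exposing layer i+1 after layer i has been chosen."""
    heap = []
    for name, layers in subbands.items():
        if layers:
            bits, dist = layers[0]
            heapq.heappush(heap, (-dist / bits, name, 0))  # steepest slope first
    chosen, spent = [], 0
    while heap:
        _, name, idx = heapq.heappop(heap)
        bits, dist = subbands[name][idx]
        if spent + bits > budget:
            continue  # this layer does not fit; try the next-best slope
        chosen.append((name, idx))
        spent += bits
        if idx + 1 < len(subbands[name]):
            nbits, ndist = subbands[name][idx + 1]
            heapq.heappush(heap, (-ndist / nbits, name, idx + 1))
    return chosen, spent

# Hypothetical example: two temporal sub-bands, two layers each
subbands = {"t0": [(100, 50), (100, 10)], "t1": [(50, 40), (50, 5)]}
print(allocate_layers(subbands, budget=200))
```

Because each chosen layer has the best available distortion-per-bit slope at the moment it is taken, truncating the transmission at any budget yields a reconstruction that is progressive in quality, which is the property both OSLA and ESLA aim for.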