Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band
This paper was published in Optics Express and is made available as an electronic reprint with the permission of OSA at https://doi.org/10.1364/OE.19.00B895.
The paper addresses the distribution of high-definition video over fiber-wireless networks. A physical-layer architecture with a low-complexity envelope-detection solution is investigated. We present both experimental studies and simulations of high-quality compressed high-definition video transmission over a 60 GHz fiber-wireless link. Using advanced video coding, we satisfy low-complexity and low-delay constraints while preserving excellent video quality over a significantly extended wireless distance. © 2011 Optical Society of America.
This work has been partly funded by the European Commission under the FP7 ICT-249142 FIVER project and by the Spanish Ministry of Science and Innovation under the TEC2009-14250 ULTRADEF project.
Lebedev, A.; Pham, T.; Beltrán Ramírez, M.; Yu, X.; Ukhanova, A.; Llorente Sáez, R.; Monroy, I.; … (2011). Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band. Optics Express, 19(26), B895–B904. https://doi.org/10.1364/OE.19.00B895
QoE-Based Low-Delay Live Streaming Using Throughput Predictions
Recently, HTTP-based adaptive streaming has become the de facto standard for
video streaming over the Internet. It allows clients to dynamically adapt media
characteristics to network conditions in order to ensure a high quality of
experience, that is, minimize playback interruptions, while maximizing video
quality at a reasonable level of quality changes. In the case of live
streaming, this task becomes particularly challenging due to the latency
constraints. The challenge further increases if a client uses a wireless
network, where the throughput is subject to considerable fluctuations.
Consequently, live streams often exhibit latencies of up to 30 seconds. In the
present work, we introduce an adaptation algorithm for HTTP-based live
streaming called LOLYPOP (Low-Latency Prediction-Based Adaptation) that is
designed to operate with a transport latency of a few seconds. To reach this
goal, LOLYPOP leverages TCP throughput predictions on multiple time scales,
from 1 to 10 seconds, along with an estimate of the prediction error
distribution. In addition to satisfying the latency constraint, the algorithm
heuristically maximizes the quality of experience by maximizing the average
video quality as a function of the number of skipped segments and quality
transitions. In order to select an efficient prediction method, we studied the
performance of several time series prediction methods in IEEE 802.11 wireless
access networks. We evaluated LOLYPOP under a large set of experimental
conditions limiting the transport latency to 3 seconds, against a
state-of-the-art adaptation algorithm from the literature, called FESTIVE. We
observed that the average video quality is up to a factor of 3 higher than
with FESTIVE. We also observed that LOLYPOP is able to reach a broader region
in the quality of experience space, and thus it is better adjustable to the
user profile or service provider requirements.
Comment: Technical Report TKN-16-001, Telecommunication Networks Group, Technische Universitaet Berlin. This TR updated TR TKN-15-00
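The prediction-based adaptation idea can be sketched in a few lines. This is only an illustrative toy, not LOLYPOP itself: the sliding-window predictor, the error estimate, and the `safety` discount are assumptions made for the example, standing in for the paper's multi-scale predictors and error-distribution model.

```python
from statistics import mean, pstdev

def predict_throughput(samples, horizon=3):
    # Toy predictor: mean of the last `horizon` throughput samples (Mbit/s).
    # LOLYPOP evaluates several time-series predictors; this stands in for them.
    return mean(samples[-horizon:])

def select_bitrate(samples, bitrates, segment_sec=2.0, latency_budget=3.0, safety=1.0):
    # Pick the highest bitrate (Mbit/s) whose segment download is expected to
    # fit within the latency budget, after discounting the prediction by an
    # estimate of its error spread (a hypothetical heuristic, not the paper's rule).
    predicted = predict_throughput(samples)
    err = pstdev(samples[-10:]) if len(samples) >= 2 else 0.0
    usable = max(predicted - safety * err, 1e-9)
    feasible = [b for b in bitrates if b * segment_sec / usable <= latency_budget]
    return max(feasible) if feasible else min(bitrates)
```

With recent samples around 12 Mbit/s and a 3-second budget, this rule skips a 20 Mbit/s rendition whose download would overshoot the budget and settles on the highest rendition that fits.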
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so
regularly, searching more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network
adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content Aware media delivery platforms.
Streaming Non-monotone Submodular Maximization: Personalized Video Summarization on the Fly
The need for real time analysis of rapidly producing data streams (e.g.,
video and image streams) motivated the design of streaming algorithms that can
efficiently extract and summarize useful information from massive data "on the
fly". Such problems can often be reduced to maximizing a submodular set
function subject to various constraints. While efficient streaming methods have
been recently developed for monotone submodular maximization, in a wide range
of applications, such as video summarization, the underlying utility function
is non-monotone, and there are often various constraints imposed on the
optimization problem to consider privacy or personalization. We develop the
first efficient single-pass streaming algorithm, Streaming Local Search, that,
given any streaming monotone submodular maximization algorithm with approximation
guarantee α under a collection of independence systems ℐ,
provides a constant approximation guarantee for maximizing a
non-monotone submodular function under the intersection of ℐ and d
knapsack constraints. Our experiments show that for video summarization, our
method runs more than 1700 times faster than previous work, while maintaining
practically the same performance.
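The single-pass setting can be illustrated with a toy threshold rule: keep a stream element only if its marginal utility gain is large enough, up to a budget of k elements. This sketch is monotone and ignores the independence-system and knapsack constraints that Streaming Local Search actually handles; the coverage utility and the fixed threshold are assumptions made for the example.

```python
def stream_summarize(stream, utility, k, threshold):
    # Single pass over the stream: keep an element if its marginal gain in
    # utility is at least `threshold`, until the summary holds k elements.
    summary = []
    current = utility(summary)
    for x in stream:
        if len(summary) >= k:
            break
        gain = utility(summary + [x]) - current
        if gain >= threshold:
            summary.append(x)
            current += gain
    return summary

def coverage(items):
    # Toy submodular utility: number of distinct tags covered by the summary.
    return len(set().union(*items)) if items else 0
```

For example, frames tagged {'a','b'}, {'a'}, {'c'}, {'b','c'}, {'d'} with k = 3 and threshold 1 yield a three-frame summary covering four distinct tags; redundant frames are skipped in a single pass.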