Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications, and with technological innovations concerning
media formats, wireless networks, terminal types and capabilities. There is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching through more than 160 exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to grow by at least a factor of 6, rising
to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network
adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as
community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of
interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and
innovative applications "on the move", like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content,
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6)
and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way
ahead in the area of content-aware media delivery platforms.
RTP and the Datagram Congestion Control Protocol
We describe how the new Datagram Congestion Control Protocol (DCCP) can be used as a bearer for the Real-time Transport Protocol (RTP), to provide a congestion-controlled basis for networked multimedia applications. This is a step towards the deployment of congestion control for such applications, which is necessary to ensure the future stability of the best-effort network if high-bandwidth streaming and IPTV services are to be deployed outside of closed QoS-managed networks.
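As a rough illustration of the idea, the sketch below opens a DCCP socket on Linux and uses it as the bearer for hand-built RTP packets. The numeric DCCP constants and the service code are assumptions (they are not exposed by the Python standard library), and the actual profile described in the paper, including the choice of CCID, may differ.

```python
# Minimal sketch: RTP packets carried over a DCCP socket on Linux.
# Assumes kernel DCCP support; SOCK_DCCP/IPPROTO_DCCP/SOL_DCCP values below
# are the usual Linux constants, stated here as assumptions.
import socket
import struct

SOCK_DCCP = 6
IPPROTO_DCCP = 33
SOL_DCCP = 269
DCCP_SOCKOPT_SERVICE = 2

def open_dccp_sender(host, port, service_code=0x0001):  # service code is hypothetical
    s = socket.socket(socket.AF_INET, SOCK_DCCP, IPPROTO_DCCP)
    s.setsockopt(SOL_DCCP, DCCP_SOCKOPT_SERVICE, struct.pack("!I", service_code))
    s.connect((host, port))
    return s

def rtp_packet(seq, timestamp, ssrc, payload, payload_type=96):
    # Fixed 12-byte RTP header (RFC 3550): V=2, no padding, no extension, no CSRCs.
    header = struct.pack("!BBHII", 0x80, payload_type, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

# Usage sketch: DCCP's congestion control paces the sends, so the application
# simply writes one RTP packet per DCCP datagram.
# sender = open_dccp_sender("203.0.113.1", 5004)
# sender.send(rtp_packet(seq=0, timestamp=0, ssrc=0x1234, payload=b"..."))
```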
Network coding meets TCP
We propose a mechanism that incorporates network coding into TCP with only
minor changes to the protocol stack, thereby allowing incremental deployment.
In our scheme, the source transmits random linear combinations of packets
currently in the congestion window. At the heart of our scheme is a new
interpretation of ACKs - the sink acknowledges every degree of freedom (i.e., a
linear combination that reveals one unit of new information) even if it does
not reveal an original packet immediately. Such ACKs enable a TCP-like
sliding-window approach to network coding. Our scheme has the nice property
that packet losses are essentially masked from the congestion control
algorithm. Our algorithm therefore reacts to packet drops in a smooth manner,
resulting in a novel and effective approach for congestion control over
networks involving lossy links such as wireless links. Our experiments show
that our algorithm achieves higher throughput compared to TCP in the presence
of lossy wireless links. We also establish the soundness and fairness
properties of our algorithm.
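To make the "degree of freedom" idea concrete, here is an illustrative sketch (not the authors' code, and simplified to GF(2) coefficients and a fixed set of equal-length packets): the sender emits random linear combinations of the packets currently in the window, and the receiver acknowledges a new degree of freedom whenever an arriving coefficient vector increases the rank of what it has already seen, even if no original packet is decodable yet.

```python
# Illustrative sketch of sliding-window network coding with degree-of-freedom ACKs.
import random

def random_combination(window):
    """Random GF(2) combination of the equal-length packets in the window."""
    coeffs = [random.randint(0, 1) for _ in window]
    if not any(coeffs):
        coeffs[random.randrange(len(window))] = 1  # avoid the all-zero combination
    coded = bytes(len(window[0]))
    for c, pkt in zip(coeffs, window):
        if c:
            coded = bytes(a ^ b for a, b in zip(coded, pkt))
    return coeffs, coded

class DegreeOfFreedomReceiver:
    """Counts innovative (rank-increasing) coefficient vectors via elimination."""
    def __init__(self):
        self.basis = []  # coefficient vectors, each reduced at insertion time

    def receive(self, coeffs):
        v = list(coeffs)
        for b in self.basis:
            pivot = next(i for i, x in enumerate(b) if x)
            if v[pivot]:
                v = [a ^ c for a, c in zip(v, b)]
        if any(v):
            self.basis.append(v)
            return True   # innovative: acknowledge a new degree of freedom
        return False      # linearly dependent: no new information, no ACK advance
```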
Source-Channel Diversity for Parallel Channels
We consider transmitting a source across a pair of independent, non-ergodic
channels with random states (e.g., slow fading channels) so as to minimize the
average distortion. The general problem is unsolved. Hence, we focus on
comparing two commonly used source and channel encoding systems which
correspond to exploiting diversity either at the physical layer through
parallel channel coding or at the application layer through multiple
description source coding.
For on-off channel models, source coding diversity offers better performance.
For channels with a continuous range of reception quality, we show the reverse
is true. Specifically, we introduce a new figure of merit called the distortion
exponent which measures how fast the average distortion decays with SNR. For
continuous-state models such as additive white Gaussian noise channels with
multiplicative Rayleigh fading, optimal channel coding diversity at the
physical layer is more efficient than source coding diversity at the
application layer in that the former achieves a better distortion exponent.
Finally, we consider a third decoding architecture: multiple description
encoding with joint source-channel decoding. We show that this architecture
achieves the same distortion exponent as systems with optimal channel coding
diversity for continuous-state channels, and maintains the advantages of
multiple description systems for on-off channels. Thus, the multiple
description system with joint decoding achieves the best performance, from
among the three architectures considered, on both continuous-state and on-off
channels.
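For reference, the figure of merit described above is conventionally written as follows (a sketch in standard notation; the precise normalisation used in the paper may differ):

```latex
% Distortion exponent: the SNR exponent of the expected end-to-end distortion.
\Delta \;=\; -\lim_{\mathrm{SNR}\to\infty}
  \frac{\log \mathbb{E}\!\left[D(\mathrm{SNR})\right]}{\log \mathrm{SNR}},
\qquad\text{so that}\qquad
\mathbb{E}\!\left[D(\mathrm{SNR})\right] \doteq \mathrm{SNR}^{-\Delta}.
```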
Q-AIMD: A Congestion Aware Video Quality Control Mechanism
Following the constant increase in multimedia traffic, it seems necessary to make transport protocols aware of the video quality of the transmitted flows rather than only their throughput. This paper proposes a novel transport mechanism adapted to video flows. Our proposal, called Q-AIMD for video-quality AIMD (Additive Increase Multiplicative Decrease), enables fairness in video quality while transmitting multiple video flows. Targeting video-quality fairness improves the overall video quality across all transmitted flows, especially when the transmitted videos have different types of content and different spatial resolutions. In addition, Q-AIMD mitigates the occurrence of network congestion events and resolves congestion whenever it occurs by decreasing the video quality and hence the bitrate. Using different video quality metrics, Q-AIMD is evaluated with different video contents and spatial resolutions. Simulation results show that Q-AIMD achieves a better overall video quality across the transmitted video flows than a throughput-based congestion control, by significantly decreasing the quality discrepancy between them.
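A minimal sketch of the idea follows; the class name, parameters and quality scale are illustrative assumptions rather than the paper's exact controller. AIMD is applied to a per-flow quality score rather than to a sending rate, and the resulting quality target is then mapped to a bitrate through a per-content rate-quality model.

```python
# Sketch: AIMD on video quality instead of throughput (parameters are assumptions).
class QualityAIMD:
    def __init__(self, q_min=0.0, q_max=100.0, alpha=1.0, beta=0.5):
        self.q_min, self.q_max = q_min, q_max
        self.alpha = alpha    # additive increase step, in quality units per update
        self.beta = beta      # multiplicative decrease factor on congestion
        self.quality = q_min  # current target quality for this flow

    def on_update(self, congested):
        if congested:
            self.quality = max(self.q_min, self.quality * self.beta)
        else:
            self.quality = min(self.q_max, self.quality + self.alpha)
        return self.quality

    def target_bitrate(self, rate_for_quality):
        """Map the quality target to a bitrate via a per-content model
        (rate_for_quality is a callable supplied by the encoder)."""
        return rate_for_quality(self.quality)
```

Because all flows increase and decrease in quality units rather than in bits per second, flows carrying demanding content simply consume more bandwidth at the same quality level, which is the intended fairness notion.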
Joint in-network video rate adaptation and measurement-based admission control: algorithm design and evaluation
The important new revenue opportunities that multimedia services offer to network and service providers come with important management challenges. For providers, it is important to control the video quality that is offered to and perceived by the user, typically known as the quality of experience (QoE). Both admission control and scalable video coding techniques can control the QoE, by blocking connections or adapting the video rate, but they influence each other's performance. In this article, we propose an in-network video rate adaptation mechanism that enables a provider to define a policy on how video rate adaptation should be performed to maximize the provider's objective (e.g., maximization of revenue or QoE). We discuss the need for a close interaction of the video rate adaptation algorithm with a measurement-based admission control system, allowing both algorithms to be effectively orchestrated and a timely switch from video rate adaptation to the blocking of connections. We propose two different rate adaptation decision algorithms that calculate which videos need to be adapted: one that is optimal in terms of the provider's policy and a heuristic based on the utility of each connection. Through an extensive performance evaluation, we show the impact of both algorithms on the rate adaptation, network utilisation and the stability of the video rate adaptation. We show that both algorithms outperform other configurations by at least 10%. Moreover, we show that the proposed heuristic is about 500 times faster than the optimal algorithm while suffering a performance drop of only approximately 2%, for the investigated video delivery scenario.
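One plausible reading of a utility-based heuristic of this kind is sketched below; the data layout and the exact downgrade criterion are assumptions, not the article's algorithm. When the measured load exceeds the admissible budget, the controller repeatedly downgrades the connection whose next lower quality layer costs the least utility per bit saved, and falls back to blocking new connections once nothing is left to adapt.

```python
# Sketch of a greedy, utility-per-bit downgrade heuristic (names are assumptions).
def greedy_downgrade(connections, capacity):
    """connections: list of dicts with 'layers' = [(bitrate, utility), ...]
    ordered from base layer upwards with strictly increasing bitrates, and
    'level' = index of the currently delivered layer."""
    def load():
        return sum(c['layers'][c['level']][0] for c in connections)

    while load() > capacity:
        best, best_cost = None, None
        for c in connections:
            if c['level'] == 0:
                continue  # already at the base layer; only admission control can help
            rate_hi, util_hi = c['layers'][c['level']]
            rate_lo, util_lo = c['layers'][c['level'] - 1]
            saved = rate_hi - rate_lo
            cost = (util_hi - util_lo) / saved if saved > 0 else float('inf')
            if best_cost is None or cost < best_cost:
                best, best_cost = c, cost
        if best is None:
            break  # nothing left to adapt: switch to blocking new connections
        best['level'] -= 1
    return connections
```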