389 research outputs found
Joint in-network video rate adaptation and measurement-based admission control: algorithm design and evaluation
The significant new revenue opportunities that multimedia services offer to network and service providers come with equally significant management challenges. For providers, it is important to control the video quality that is offered to and perceived by the user, typically known as the quality of experience (QoE). Both admission control and scalable video coding techniques can control the QoE, by blocking connections or by adapting the video rate, but they influence each other's performance. In this article, we propose an in-network video rate adaptation mechanism that enables a provider to define a policy on how the video rate adaptation should be performed to maximize the provider's objective (e.g., maximization of revenue or QoE). We discuss the need for a close interaction of the video rate adaptation algorithm with a measurement-based admission control system, making it possible to effectively orchestrate both algorithms and to switch from video rate adaptation to the blocking of connections in a timely manner. We propose two different rate adaptation decision algorithms that calculate which videos need to be adapted: an optimal one in terms of the provider's policy and a heuristic based on the utility of each connection. Through an extensive performance evaluation, we show the impact of both algorithms on the rate adaptation, network utilisation and the stability of the video rate adaptation. We show that both algorithms outperform other configurations by at least 10%. Moreover, we show that the proposed heuristic is about 500 times faster than the optimal algorithm and suffers a performance drop of only approximately 2%, given the investigated video delivery scenario.
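The utility-based heuristic described in the abstract could be sketched as a greedy loop: while total demand exceeds capacity, downgrade the connection whose downgrade loses the least utility per unit of freed bandwidth, and hand over to admission control (blocking) when no further adaptation is possible. The data layout, numbers and the tie-breaking rule below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a utility-based rate adaptation heuristic:
# greedily downgrade the connection with the cheapest utility cost per
# kbps freed; block a connection only when no downgrade remains.

def adapt_rates(connections, capacity):
    """connections: list of dicts with 'rates' (descending kbps) and
    'utility' (one value per rate, same order). Returns a dict mapping
    connection index -> chosen rate index; blocked connections are
    dropped from the mapping."""
    choice = {i: 0 for i in range(len(connections))}  # start at top rate

    def total():
        return sum(connections[i]['rates'][c] for i, c in choice.items())

    while total() > capacity:
        # candidate downgrades: (utility lost per kbps freed, conn id)
        candidates = []
        for i, c in choice.items():
            conn = connections[i]
            if c + 1 < len(conn['rates']):
                freed = conn['rates'][c] - conn['rates'][c + 1]
                lost = conn['utility'][c] - conn['utility'][c + 1]
                candidates.append((lost / freed, i))
        if not candidates:
            # no adaptation left: block the lowest-utility connection,
            # mimicking the hand-over to admission control
            victim = min(choice,
                         key=lambda i: connections[i]['utility'][choice[i]])
            del choice[victim]
            continue
        _, best = min(candidates)
        choice[best] += 1
    return choice
```

An optimal algorithm would instead search over all joint rate assignments (e.g., as an integer program), which explains the roughly 500x runtime gap the abstract reports.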
An autonomic delivery framework for HTTP adaptive streaming in multicast-enabled multimedia access networks
The consumption of multimedia services over HTTP-based delivery mechanisms has recently gained popularity due to their increased flexibility and reliability. Traditional broadcast TV channels are now offered over the Internet, in order to support Live TV for a broad range of consumer devices. Moreover, service providers can greatly benefit from offering external live content (e.g., YouTube, Hulu) in a managed way. Recently, HTTP Adaptive Streaming (HAS) techniques have been proposed in which video clients dynamically adapt their requested video quality level based on the current network and device state. Unlike linear TV, traditional HTTP- and HAS-based video streaming services depend on unicast sessions, leading to a network traffic load proportional to the number of multimedia consumers. In this paper we propose a novel HAS-based video delivery architecture, which features intelligent multicasting and caching in order to considerably decrease the required bandwidth in a Live TV scenario. Furthermore, we discuss the autonomic selection of multicast content to support Video on Demand (VoD) sessions. Experiments were conducted in a large-scale, realistic emulation environment, and the results were compared with a traditional HAS-based media delivery setup using only unicast connections.
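The client-side adaptation step that HAS relies on can be illustrated with a minimal throughput-based selection rule: pick the highest quality level whose bitrate fits within a safety margin of the measured throughput. The function name, levels and the 0.8 margin below are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of HAS client-side quality selection: choose the
# highest bitrate that fits within a fraction of measured throughput.

def select_quality(bitrates_kbps, measured_kbps, margin=0.8):
    """bitrates_kbps: available quality levels, ascending.
    Returns the index of the chosen level."""
    budget = measured_kbps * margin
    chosen = 0  # fall back to the lowest level if nothing fits
    for i, rate in enumerate(bitrates_kbps):
        if rate <= budget:
            chosen = i
    return chosen
```

In a unicast deployment every client runs this loop over its own session, which is exactly what makes load grow linearly with the audience and motivates the multicast/caching architecture the paper proposes.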
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, image, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications, and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so
regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network
adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and
innovative applications "on the move", like virtual collaboration environments, personalised services and
media, virtual sport groups, on-line gaming, and edutainment. In this context, interaction with content,
combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P
networks, and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, aiming to describe the status, the state of the art, the challenges and the way
ahead in the area of content-aware media delivery platforms.
Low-Complexity 3D-DWT video encoder applicable to IPTV
3D-DWT encoders are good candidates for applications like professional video editing,
IPTV video surveillance, live event IPTV broadcast, multispectral satellite imaging, HQ
video delivery, etc., where a frame must be reconstructed as fast as possible. However,
the main drawback of the algorithms that compute the 3D-DWT is the huge memory
requirement in practical implementations. In this paper, and in order to considerably
reduce the memory requirements of this kind of video encoders, we present a new
3D-DWT video encoder based on (a) the use of a novel frame-based 3D-DWT transform
that avoids video sequence partitioning in Groups Of Pictures (GOP) and (b) a very fast
run-length encoder. Furthermore, an exhaustive evaluation of the proposed encoder
(3D-RLW) has been performed, analyzing the sensitivity to the filters employed in the
3D-DWT transform and comparing the evaluation results with other video encoders in
terms of R/D, coding/decoding delay and memory consumption. This work was supported by the Spanish Ministry of Education and Science under grant DPI2007-66796-C03-03. López, O.; Piñol, P.; Martinez Rach, M.O.; Perez Malumbres, M.J.; Oliver Gil, J.S. (2011). Low-Complexity 3D-DWT video encoder applicable to IPTV. Signal Processing: Image Communication, 26(7):358-369. https://doi.org/10.1016/j.image.2011.01.008
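The "very fast run-length encoder" mentioned in the abstract can be illustrated generically: quantized wavelet subbands contain long runs of identical (mostly zero) coefficients, which collapse into (value, run length) pairs. This is a textbook run-length pass for illustration only, not the paper's 3D-RLW coder.

```python
# Illustrative run-length encoding of quantized DWT coefficients:
# consecutive equal values collapse into a single (value, count) pair.

def rle_encode(coeffs):
    """Encode a sequence of quantized coefficients as
    (value, run_length) pairs."""
    out = []
    for v in coeffs:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)  # extend the current run
        else:
            out.append((v, 1))             # start a new run
    return out
```

The memory saving claimed in the paper comes from the frame-based transform rather than from the entropy stage: by avoiding GOP buffering, only a small sliding window of frames needs to be held while the wavelet lifting steps are applied.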
TV-Centric technologies to provide remote areas with two-way satellite broadband access
October 1-2, 2007, Rome, Italy
RBF-Based QP Estimation Model for VBR Control in H.264/SVC
In this paper we propose a novel variable bit rate (VBR) controller for real-time H.264/scalable video coding (SVC) applications. The proposed VBR controller relies on the fact that consecutive pictures within the same scene often exhibit similar degrees of complexity, and consequently should be encoded using similar quantization parameter (QP) values for the sake of quality consistency. In order to prevent unnecessary QP fluctuations, the proposed VBR controller allows for just an incremental variation of QP with respect to that of the previous picture, focusing on the design of an effective method for estimating this QP variation. The implementation in H.264/SVC requires locating a rate controller at each dependency layer (spatial or coarse grain scalability). In particular, the QP increment estimation at each layer is computed by means of a radial basis function (RBF) network that is specially designed for this purpose. Furthermore, the RBF network design process was conceived to provide an effective solution for a wide range of practical real-time VBR applications for scalable video content delivery. In order to assess the proposed VBR controller, two real-time application scenarios were simulated: mobile live streaming and IPTV broadcast. It was compared to constant QP encoding and a recently proposed constant bit rate (CBR) controller for H.264/SVC. The experimental results show that the proposed method achieves remarkably consistent quality, outperforming the reference CBR controller in both scenarios for all the spatio-temporal resolutions considered. Funded by project CCG10-UC3M/TIC-5570 of the Comunidad Autónoma de Madrid and Universidad Carlos III de Madrid.
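The per-layer QP increment estimator described above is an RBF network. A generic Gaussian RBF evaluation is sketched below; the centers, widths and weights would come from the design procedure in the paper and are placeholder values here, not the trained model.

```python
# Generic Gaussian RBF network evaluation: the estimated QP increment
# is a weighted sum of Gaussian responses to a complexity feature
# vector. Centers/widths/weights are placeholders, not trained values.
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """x and each centers[i] are feature vectors of equal length;
    returns the network output (here, an estimated QP increment)."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-d2 / (2 * s * s))
    return y
```

In a rate controller, the output would be rounded and clipped to a small integer range (e.g., ±2) before being added to the previous picture's QP, which is what keeps quality fluctuations bounded.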
A two-way interactive broadband satellite architecture to break the digital divide barrier
September 24-26, 2007, Turin, Italy
40 Gbps Access for Metro networks: Implications in terms of Sustainability and Innovation from an LCA Perspective
In this work, the implications of new technologies, more specifically the new
optical FTTH technologies, are studied from both the functional and
non-functional perspectives. In particular, some direct impacts are listed in
the form of abandoning non-functional technologies, such as micro-registration,
which were implicitly required for a functioning operation before the
arrival of the new high-bandwidth access technologies. It is shown that such
abandonment of non-functional best practices, which sit mainly at the
management level of ICT, immediately results in additional consumption and
environmental footprint, and that some other new
innovations might be 'missed.' Therefore, unconstrained deployment of these
access technologies is not aligned with a sustainable ICT picture
unless they are regulated. An approach to pricing the best practices,
including both functional and non-functional technologies, is proposed in order
to develop a regulation and policy framework for sustainable broadband
access. Comment: 10 pages, 6 Tables, 1 Figure. Accepted to be presented at the
ICT4S'15 Conference.
- …