658 research outputs found

    Design and evaluation of a DASH-compliant second screen video player for live events in mobile scenarios

    The huge diffusion of mobile devices is rapidly changing the way multimedia content is consumed. Mobile devices are often used as a second screen, providing information complementary to the content shown on the primary screen, such as different camera angles in the case of a sport event. The introduction of multiple camera angles poses many challenges with respect to guaranteeing a high Quality of Experience to the end user, especially when the live aspect, different devices, and the highly variable network conditions typical of mobile environments come into play. Due to their ability to dynamically adapt to bandwidth fluctuations, HTTP Adaptive Streaming (HAS) protocols are especially suited for the delivery of multimedia content in mobile environments. In HAS, each video is temporally segmented and stored at different quality levels. Rate adaptation heuristics, deployed at the video player, dynamically request the most appropriate quality level based on the current network conditions. Recently, a standardized solution called Dynamic Adaptive Streaming over HTTP (DASH) has been proposed by the MPEG consortium. In this paper we present a DASH-compliant iOS video player designed to support research on rate adaptation heuristics for live second screen scenarios in mobile environments. The video player monitors the battery consumption and CPU usage of the mobile device and provides this information to the heuristic. Live and Video-on-Demand streaming scenarios and real-time multi-video switching are supported as well. Quantitative results based on real 3G traces are reported on how the developed prototype has been used to benchmark two existing heuristics and to analyse the main aspects affecting battery lifetime in mobile video streaming.
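The abstract's core mechanism, a rate adaptation heuristic that requests the most appropriate quality level given current network conditions, can be sketched in a few lines. This is a minimal, hypothetical throughput-based heuristic, not the paper's actual algorithm; the quality ladder, safety margin, and function name are illustrative.

```python
def select_quality(bitrates_kbps, measured_throughput_kbps, safety_margin=0.8):
    """Return the index of the highest quality level whose bitrate fits
    within a conservative fraction of the measured throughput."""
    budget = measured_throughput_kbps * safety_margin
    best = 0  # always fall back to the lowest quality
    for i, rate in enumerate(sorted(bitrates_kbps)):
        if rate <= budget:
            best = i
    return best

# Example: a ladder of 500/1000/2000/4000 kbps with 2.5 Mbps measured.
# Budget is 2000 kbps, so the 2000 kbps level (index 2) is chosen.
print(select_quality([500, 1000, 2000, 4000], 2500))  # 2
```

Real heuristics, including those benchmarked with the prototype, additionally weigh buffer occupancy and, here, battery and CPU readings exposed by the player.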

    QoE-centric management of advanced multimedia services

    Over the last years, multimedia content has become more prominent than ever. In particular, video streaming is responsible for more than half of the total global bandwidth consumption on the Internet. As the original Internet was not designed to deliver such real-time, bandwidth-consuming applications, a serious challenge is posed on how to efficiently provide the best service to the users. This requires a shift in the classical approach used to deliver multimedia content, from a pure Quality of Service (QoS) to a full Quality of Experience (QoE) perspective. While QoS parameters are mainly related to low-level network aspects, the QoE reflects how the end-users perceive a particular multimedia service. As the relationship between QoS parameters and QoE is far from linear, a classical QoS-centric delivery is not able to fully optimize the quality as perceived by the users. This paper provides an overview of the main challenges this PhD aims to tackle in the field of end-to-end QoE optimization of video streaming services and, more precisely, of HTTP Adaptive Streaming (HAS) solutions, which are quickly becoming the de facto standard for video delivery over the Internet.
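The non-linear QoS-to-QoE relationship the abstract refers to is often illustrated with a logarithmic mapping from a network metric to a Mean Opinion Score (MOS). The model and its coefficients below are purely illustrative, not taken from this work.

```python
import math

def mos_from_throughput(throughput_mbps, a=1.0, b=1.3):
    """Illustrative logarithmic QoE model: MOS grows with throughput but
    with diminishing returns, clipped to the 1..5 MOS scale."""
    mos = a + b * math.log(1 + throughput_mbps)
    return min(5.0, max(1.0, mos))

# Adding 1 Mbps at low throughput improves perceived quality far more
# than adding the same 1 Mbps at high throughput.
gain_low = mos_from_throughput(2) - mos_from_throughput(1)
gain_high = mos_from_throughput(9) - mos_from_throughput(8)
print(gain_low > gain_high)  # True
```

This diminishing-returns shape is why a QoS-centric delivery that maximizes raw throughput does not automatically maximize the quality users perceive.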

    Foveated Video Streaming for Cloud Gaming

    Good user experience with interactive cloud-based multimedia applications, such as cloud gaming and cloud-based VR, requires both low end-to-end latency and large amounts of downstream network bandwidth. In this paper, we present a foveated video streaming system for cloud gaming. The system adapts video stream quality by adjusting the encoding parameters on the fly to match the player's gaze position. We conduct measurements with a prototype that we developed for a cloud gaming system in conjunction with eye tracker hardware. Evaluation results suggest that such foveated streaming can reduce bandwidth requirements by more than 50%, depending on the parametrization of the foveated video coding, and that it is feasible from the latency perspective.
    Comment: Submitted to the IEEE 19th International Workshop on Multimedia Signal Processing.
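The foveated principle described, lowering encoding quality away from the gaze position, can be sketched as a quantization parameter (QP) offset that grows with distance from the gaze point. The radius, ramp, and cap below are hypothetical values for illustration, not the paper's parametrization.

```python
import math

def qp_offset(block_xy, gaze_xy, fovea_radius_px=200, max_offset=12):
    """Return a QP offset for a block: 0 (full quality) inside the foveal
    region, then a linear ramp toward coarser quantization, capped."""
    dist = math.dist(block_xy, gaze_xy)
    if dist <= fovea_radius_px:
        return 0  # full quality where the player is looking
    # Hypothetical ramp: +1 QP per 50 px beyond the foveal radius.
    return min(max_offset, int((dist - fovea_radius_px) / 50))

print(qp_offset((960, 540), (960, 540)))   # 0: block at the gaze point
print(qp_offset((1800, 540), (960, 540)))  # 12: capped offset in the periphery
```

Since higher QP means fewer bits per block, most of the frame is encoded cheaply while the region under the gaze keeps full fidelity, which is where the bandwidth savings come from.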

    QoE on media delivery in 5G environments

    [EN] 5G will expand mobile networks with greater bandwidth, lower latency, and the capacity to provide massive, failure-free connectivity. Users of multimedia services expect a smooth playback experience that dynamically adapts to their interests and their mobility context. However, the network, adopting a neutral position, does not help strengthen the parameters that affect the quality of experience. Consequently, solutions designed to deliver multimedia traffic dynamically and efficiently are of special interest. To improve the Quality of Experience of multimedia services in 5G environments, the research carried out in this thesis has designed a multiple system based on four contributions. The first mechanism, SaW, creates an elastic farm of computing resources that execute multimedia analysis tasks. The results confirm the competitiveness of this approach compared to server farms. The second mechanism, LAMB-DASH, selects the quality in the media player with a design that requires low processing complexity. The tests demonstrate its ability to improve the stability, consistency, and uniformity of the Quality of Experience among clients sharing a network cell. The third mechanism, MEC4FAIR, exploits 5G capabilities to analyse delivery metrics of the different flows. The results show how it enables the service to coordinate the different clients in the cell to improve the quality of the service. The fourth mechanism, CogNet, provisions network resources and configures a topology able to accommodate an estimated demand and guarantee quality-of-service bounds. In this case, the results show greater precision when the demand for a service is higher.

    HbbTV-compliant Platform for Hybrid Media Delivery and Synchronization on Single- and Multi-Device Scenarios

    [EN] The combination of broadcast and broadband (hybrid) technologies for delivering TV-related media contents can bring fascinating opportunities. It is motivated by the large amount and diversity of media contents, together with the ubiquity and multiple connectivity capabilities of modern consumption devices. This paper presents an end-to-end platform for the preparation, delivery, and synchronized consumption of related hybrid (broadcast/broadband) media contents on a single device and/or on multiple close-by devices (i.e., a multi-device scenario). It is compatible with the latest version of the Hybrid Broadcast Broadband TV (HbbTV) standard (version 2.0.1). Additionally, it provides adaptive and efficient solutions for key issues not specified in that standard, but that are necessary to successfully deploy hybrid and multi-device media services. Moreover, apart from MPEG-DASH and HTML5, which are the broadband technologies adopted by HbbTV, the platform also supports HTTP Live Streaming and the Real-time Transport Protocol with its companion RTP Control Protocol. The presented platform can support many hybrid media services. In this paper, in order to evaluate it, the use case of a multi-device and multi-view TV service has been selected. The results of both objective and subjective assessments have been very satisfactory in terms of performance (stability, smooth playout, delays, and sync accuracy), usability of the platform, usefulness of its functionalities, and the awakened interest in these kinds of platforms.
    This work was supported in part by the "Fondo Europeo de Desarrollo Regional" and in part by the Spanish Ministry of Economy and Competitiveness through the R&D&I Support Program under Grant TEC2013-45492-R.
    Boronat, F.; Marfil-Reguero, D.; Montagud, M.; Pastor Castillo, FJ. (2017). HbbTV-compliant Platform for Hybrid Media Delivery and Synchronization on Single- and Multi-Device Scenarios. IEEE Transactions on Broadcasting. 1-26. https://doi.org/10.1109/TBC.2017.2781124
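The synchronized consumption the platform targets boils down to comparing the current media timestamps of the broadcast and broadband streams and correcting the laggard. This is a minimal hypothetical sketch of that idea; the 40 ms threshold and function name are illustrative assumptions, not the platform's specified sync algorithm.

```python
def sync_adjustment(broadcast_pts_s, broadband_pts_s, threshold_s=0.040):
    """Return the number of seconds to skew the broadband playout:
    0.0 if the two streams are already within the sync threshold,
    positive if broadband lags and must jump forward."""
    drift = broadcast_pts_s - broadband_pts_s
    if abs(drift) <= threshold_s:
        return 0.0  # within the sync accuracy target, leave playout alone
    return drift

print(sync_adjustment(12.500, 12.510))  # 0.0: 10 ms drift is tolerated
print(sync_adjustment(12.500, 12.300))  # ~0.2: broadband jumps ~200 ms ahead
```

In practice, such corrections are applied gradually (playout rate adjustment rather than hard seeks) so the user does not notice the skew, and the timestamps must first be mapped to a shared wall-clock reference.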

    Flexi-WVSNP-DASH: A Wireless Video Sensor Network Platform for the Internet of Things

    Video capture, storage, and distribution in wireless video sensor networks (WVSNs) critically depend on the resources of the nodes forming the sensor networks. In the era of big data, the Internet of Things (IoT), and distributed demand and solutions, there is a need for multi-dimensional data to be part of the sensor network data, easily accessible and consumable by humans as well as machines. Images and video are expected to become as ubiquitous as scalar data in traditional sensor networks. The inception of video streaming over the Internet heralded relentless research into effective ways of distributing video in a scalable and cost-effective way. There have been novel implementation attempts across several network layers. Due to the inherent complications of backward compatibility and the need for standardization across network layers, attention has refocused on addressing most of the video distribution at the application layer. As a result, a few video streaming solutions over the Hypertext Transfer Protocol (HTTP) have been proposed. Most notable are Apple's HTTP Live Streaming (HLS) and the Moving Picture Experts Group's Dynamic Adaptive Streaming over HTTP (MPEG-DASH). These frameworks do not address typical and future WVSN use cases. A highly flexible Wireless Video Sensor Network Platform and compatible DASH (WVSNP-DASH) are introduced. The platform's goal is to usher in video as a data element that can be integrated into traditional and non-Internet networks. A low-cost, scalable node is built from the ground up to be fully compatible with the Internet of Things Machine-to-Machine (M2M) concept, and to be easily re-targeted to new applications in a short time. The Flexi-WVSNP design includes a multi-radio node, middleware for sensor operation and communication, a cross-platform client-facing data retriever/player framework, scalable security, and a cohesive but decoupled hardware and software design.
    Doctoral Dissertation, Electrical Engineering.

    Enhancing the broadcasted TV consumption experience with broadband omnidirectional video content

    [EN] The current wide range of heterogeneous consumption devices and delivery technologies offers the opportunity to provide related contents in order to enhance and enrich the TV consumption experience. This paper describes a solution to handle the delivery and synchronous consumption of traditional broadcast TV content and related broadband omnidirectional video content. The solution is intended to support both hybrid (broadcast/broadband) delivery technologies and has been designed to be compatible with the Hybrid Broadcast Broadband TV (HbbTV) standard. In particular, some specifications of HbbTV, such as the use of global timestamps and discovery mechanisms, have been adopted. However, additional functionalities have been designed to achieve accurate synchronization and to support the playout of omnidirectional video content on current consumption devices. In order to prove that commercial hybrid environments could be immediately enhanced with this type of content, the proposed solution has been included in a testbed and objectively and subjectively evaluated. Regarding the omnidirectional video content, the two most common types of projections are supported: equirectangular and cube map. The results of the objective assessment show that the playout of broadband-delivered omnidirectional video content on companion devices can be accurately synchronized with the playout on TV of traditional broadcast 2D content. The results of the subjective assessment show the high interest of users in this type of new enriched and immersive experience, which contributes to enhancing their Quality of Experience (QoE) and engagement.
    This work was supported by the Generalitat Valenciana, Investigación Competitiva Proyectos, through the Research and Development Program Grants for Research Groups to be Consolidated, under Grant AICO/2017/059 and Grant AICO/2017.
    Marfil-Reguero, D.; Boronat, F.; López, J.; Vidal Meló, A. (2019). Enhancing the broadcasted TV consumption experience with broadband omnidirectional video content. IEEE Access. 7:171864-171883. https://doi.org/10.1109/ACCESS.2019.2956084
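Of the two supported projections, the equirectangular one has the simplest geometry: a viewing direction (yaw, pitch) maps linearly onto the 2:1 video frame. A minimal sketch of that mapping follows; the 3840x1920 frame size is an illustrative assumption, not a resolution taken from the paper.

```python
def equirect_pixel(yaw_deg, pitch_deg, width=3840, height=1920):
    """Map a viewing direction to (x, y) pixel coordinates in an
    equirectangular frame: yaw in [-180, 180), pitch in [-90, 90].
    Longitude maps linearly to x, latitude linearly to y."""
    x = (yaw_deg + 180.0) / 360.0 * width
    y = (90.0 - pitch_deg) / 180.0 * height
    return int(x) % width, int(y)

print(equirect_pixel(0, 0))      # (1920, 960): frame centre, straight ahead
print(equirect_pixel(-180, 90))  # (0, 0): top-left corner, looking up-behind
```

A cube-map projection instead renders the sphere onto six faces, which avoids the heavy pixel stretching equirectangular frames suffer near the poles, at the cost of a more involved lookup.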

    Semantic multimedia remote display for mobile thin clients

    Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user interactivity handling. The experimental setup considers the Linux X windows system and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft-RDP) by considering text editing and WWW browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB or SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.
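The reported PSNR range of 30-42 dB is derived from the mean squared error between reference and rendered frames. The standard computation can be sketched as follows; the toy four-pixel lists stand in for full frames and are illustrative only.

```python
import math

def psnr(reference, rendered, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of 8-bit pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, rendered)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform error of 4 grey levels per pixel gives roughly 36 dB,
# squarely inside the 30-42 dB range the paper reports.
ref = [100, 150, 200, 250]
out = [104, 146, 204, 246]
print(round(psnr(ref, out), 1))  # 36.1
```

SSIM, the other headline metric, instead compares local luminance, contrast, and structure statistics, which is why near-perfect reconstructions cluster so close to 1.0 (here, above 0.9999).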

    Toward hyper-realistic and interactive social VR experiences in live TV scenarios

    © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Social Virtual Reality (VR) allows multiple distributed users to get together in shared virtual environments to socially interact and/or collaborate. This article explores the applicability and potential of Social VR in the broadcast sector, focusing on a live TV show use case. For such a purpose, a novel and lightweight Social VR platform is introduced. The platform provides three key features that stand out from state-of-the-art solutions. First, it allows a real-time integration of remote users in shared virtual environments, using realistic volumetric representations and affordable capturing systems, thus not relying on the use of synthetic avatars. Second, it supports a seamless and rich integration of heterogeneous media formats, including 3D scenarios, dynamic volumetric representations of users, and (live/stored) stereoscopic 2D and 180º/360º videos. Third, it enables low-latency interaction between the volumetric users and a video-based presenter (Chroma keying), and a dynamic control of the media playout to adapt to the session's evolution. The production process of an immersive TV show used to evaluate the experience is also described. On the one hand, the results from objective tests show the satisfactory performance of the platform. On the other hand, the promising results from user tests support the potential impact of the presented platform, opening up new opportunities in the broadcast sector, among others.
    This work has been partially funded by the European Union's Horizon 2020 program, under agreement nº 762111 (VRTogether project), and partially by ACCIÓ, under agreement COMRDI18-1-0008 (ViVIM project). Work by Mario Montagud has been additionally funded by the Spanish Ministry of Science, Innovation and Universities with a Juan de la Cierva – Incorporación grant (reference IJCI-2017-34611). The authors would also like to thank the EU H2020 VRTogether project consortium for their relevant and valuable contributions.