30 research outputs found

    Cloud media video encoding: review and challenges

    Get PDF
    In recent years, Internet traffic patterns have been changing. Most of the traffic demanded by end users is multimedia; video streaming alone accounts for over 53%. This demand has driven improvements in network infrastructures and computing architectures to meet the challenges of delivering these multimedia services while maintaining an adequate quality of experience. Focusing on the preparation and adaptation of multimedia content for delivery, Cloud and Edge Computing infrastructures have been, and will remain, crucial for offering high and ultra-high definition content in live, real-time, or video-on-demand scenarios. For these reasons, this review presents a detailed study of research papers on encoding and transcoding techniques in cloud computing environments. It begins by discussing the evolution of streaming and the importance of the encoding process, with a focus on the latest streaming methods and codecs. It then examines the role of cloud systems in multimedia environments and details cloud infrastructures for media scenarios. Through a systematic literature review, we identified 49 valid papers that meet the requirements specified in the research questions. Each paper has been analyzed and classified according to several criteria, and its relevance inspected. To conclude the review, we identify and elaborate on several challenges and open research issues associated with developing video codecs optimized for diverse factors within both cloud and edge architectures. Additionally, we discuss emerging challenges in designing new cloud/edge architectures aimed at more efficient delivery of media traffic. This involves investigating ways to improve the overall performance, reliability, and resource utilization of architectures that support the transmission of multimedia content over both cloud and edge computing environments, ensuring a good quality of experience for the end user.
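
    As background for the encoding workloads discussed above, the sketch below shows, in an illustrative way only, the kind of multi-rendition transcoding job that cloud encoding pipelines schedule for adaptive streaming; the ffmpeg options, rendition ladder, and file names are assumptions for the example, not details taken from the paper.

        # Illustrative only: a minimal sketch of a bitrate-ladder transcode, the kind of
        # job cloud encoding systems fan out across workers. Assumes ffmpeg with libx264
        # is installed; renditions and file names are hypothetical.
        import subprocess

        LADDER = [          # (output height, video bitrate) pairs for the example ladder
            (1080, "5000k"),
            (720, "3000k"),
            (480, "1200k"),
        ]

        def transcode(source: str) -> None:
            for height, bitrate in LADDER:
                cmd = [
                    "ffmpeg", "-y", "-i", source,
                    "-c:v", "libx264", "-b:v", bitrate,
                    "-vf", f"scale=-2:{height}",   # keep aspect ratio, even width
                    "-c:a", "aac", "-b:a", "128k",
                    f"output_{height}p.mp4",
                ]
                # In a cloud setting, each rendition would typically be a separate job.
                subprocess.run(cmd, check=True)

        if __name__ == "__main__":
            transcode("input.mp4")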

    An open-source HEVC cloud transcoding system

    Get PDF
    The ability to record vast amounts of video requires easy-to-use and efficient video coding systems that can adapt to limited transmission and storage capacities. This work presents an open-source cloud service for transcoding videos into the H.265/HEVC format. Alternative commercial implementations are available, but they sit behind paywalls, while using command-line interfaces requires a deep understanding of the compression process to achieve the best possible quality and speed. The presented system is an easy-to-use, open-source implementation, which makes it approachable even for non-technical users. It is built on the FFmpeg multimedia tool, which can decode a wide variety of inputs, and on the Kvazaar HEVC encoder for video compression.
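
    The abstract above describes an FFmpeg front end feeding the Kvazaar HEVC encoder; the following is a minimal illustrative sketch of that pipeline (file names, resolution, frame rate, and preset are assumptions, and the actual service wraps this in a cloud front end).

        # Illustrative only: FFmpeg decodes an arbitrary input to raw YUV frames on stdout
        # and Kvazaar encodes that stream to HEVC. Settings below are example values,
        # not values from the thesis.
        import subprocess

        def transcode_to_hevc(src: str, dst: str, width: int, height: int, fps: int) -> None:
            decode = subprocess.Popen(
                ["ffmpeg", "-i", src, "-f", "rawvideo", "-pix_fmt", "yuv420p", "-"],
                stdout=subprocess.PIPE,
            )
            encode = subprocess.Popen(
                ["kvazaar", "-i", "-",                   # read raw YUV from stdin
                 "--input-res", f"{width}x{height}",
                 "--input-fps", str(fps),
                 "--preset", "medium",
                 "-o", dst],
                stdin=decode.stdout,
            )
            decode.stdout.close()                        # let the encoder own the pipe
            encode.wait()
            decode.wait()

        if __name__ == "__main__":
            transcode_to_hevc("input.mp4", "output.hevc", 1920, 1080, 25)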

    EdgeMORE: improving resource allocation with multiple options from tenants

    Get PDF
    Under the paradigm of Edge Computing (EC), a Network Operator (NO) deploys computational resources at the network edge and lets third-party Service Providers (SPs) run on top of them as tenants. Besides the clear advantages for SPs and end users thanks to the vicinity of computation nodes, an NO aims to allocate edge resources so as to increase its own utility, including bandwidth savings, operational cost reduction, QoE for its users, etc. However, while the number of third-party services competing for edge resources is expected to grow dramatically, the deployed resources cannot increase accordingly, due to physical limitations. Therefore, smart strategies are needed to fully exploit the potential of EC despite its constraints. To this aim, we propose to leverage service adaptability, a dimension that has largely been neglected so far: each service can adapt to the amount of resources the NO allocates to it, balancing the fraction of service computation performed at the edge against the fraction delegated to remote servers, e.g., in the Cloud. We propose EdgeMORE, a resource allocation strategy in which SPs express their ability to adapt to different resource constraints by declaring the configurations under which they are able to run, each specifying the resources needed and the utility provided to the NO. The NO then chooses the most convenient option for each SP in order to maximize the total utility. We formalize EdgeMORE as an Integer Linear Program and show via simulation that it greatly improves EC utility with respect to the standard approach, in which services do not offer multiple running options.
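
    As a toy illustration of the allocation problem described above (not the authors' formulation or code): each tenant declares candidate configurations as (resource demand, utility) pairs and the operator selects at most one configuration per tenant within an edge capacity budget; brute-force search stands in for the paper's Integer Linear Program on this small instance.

        # Illustrative only: the operator picks at most one declared configuration per
        # tenant without exceeding edge capacity, maximizing total utility. Brute-force
        # enumeration is used purely for clarity; the paper formulates this as an ILP.
        from itertools import product

        def best_allocation(tenants, capacity):
            """tenants: list of per-tenant lists of (demand, utility); returns (utility, choices)."""
            best_value, best_choice = 0, None
            # choice index -1 means "allocate nothing to this tenant"
            for choice in product(*[range(-1, len(opts)) for opts in tenants]):
                demand = sum(tenants[t][c][0] for t, c in enumerate(choice) if c >= 0)
                if demand > capacity:
                    continue
                value = sum(tenants[t][c][1] for t, c in enumerate(choice) if c >= 0)
                if value > best_value:
                    best_value, best_choice = value, choice
            return best_value, best_choice

        if __name__ == "__main__":
            # Two tenants, each with a "small" and a "large" configuration (demand, utility).
            tenants = [[(2, 3), (5, 7)], [(3, 4), (6, 9)]]
            print(best_allocation(tenants, capacity=8))  # picks (2, 3) and (6, 9) -> utility 12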

    A Survey on Energy Consumption and Environmental Impact of Video Streaming

    Full text link
    Climate change challenges require a notable decrease in worldwide greenhouse gas (GHG) emissions across technology sectors. Digital technologies, and especially video streaming, which accounts for most Internet traffic, are no exception. Video streaming demand increases with remote working, multimedia communication services (e.g., WhatsApp, Skype), video streaming content (e.g., YouTube, Netflix), video resolution (4K/8K, 50 fps/60 fps), and multi-view video, making energy consumption and environmental footprint critical. This survey contributes to a better understanding of sustainable and efficient video streaming technologies by providing insights into the state of the art and potential future directions for researchers, developers, engineers, service providers, hosting platforms, and consumers. We widen this survey's focus to both content provisioning and content consumption, based on the observation that the continuously active network equipment underlying video streaming consumes substantial energy independent of the type of data transmitted. We propose a taxonomy of factors that affect energy consumption in video streaming, such as encoding schemes, resource requirements, storage, content retrieval, decoding, and display. We identify notable weaknesses in video streaming that require further research for improved energy efficiency: (1) fixed bitrate ladders in HTTP live streaming; (2) inefficient hardware utilization of existing video players; and (3) the lack of a comprehensive open energy-measurement dataset covering various device types and coding parameters for reproducible research.
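
    To make weakness (1) above concrete, here is an illustrative toy example (not taken from the survey) of why fixed bitrate ladders can waste bits and energy: a per-title check drops ladder rungs whose measured quality gain over the next-lower kept rung is negligible. The ladder, quality scores, and threshold are made-up numbers.

        # Illustrative only: prune a fixed bitrate ladder using hypothetical per-title
        # quality measurements (e.g., VMAF scores). All numbers are invented.
        FIXED_LADDER = [500, 1200, 3000, 5000, 8000]                            # kbps rungs
        MEASURED_QUALITY = {500: 72, 1200: 86, 3000: 93, 5000: 94, 8000: 94.5}  # per-title scores

        def prune_ladder(ladder, quality, min_gain=2.0):
            """Keep a rung only if it improves quality by at least min_gain over the last kept rung."""
            kept = [ladder[0]]
            for rung in ladder[1:]:
                if quality[rung] - quality[kept[-1]] >= min_gain:
                    kept.append(rung)
            return kept

        print(prune_ladder(FIXED_LADDER, MEASURED_QUALITY))  # -> [500, 1200, 3000]; top rungs dropped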

    Approaches to video content preparation for video-on-demand (VoD) streaming with DASH

    Get PDF
    The consumption of multimedia content over the Internet, especially video, is growing steadily and has become a daily activity for people around the world. In this context, several studies in recent years have focused on the preparation, distribution, and transmission of multimedia content, especially in the field of video on demand (VoD). This thesis proposes different contributions in the field of video coding for VoD scenarios streamed with the Dynamic Adaptive Streaming over HTTP (DASH) standard. The goal is to find a balance between the efficient use of computational resources and the guarantee of delivering a high quality of experience (QoE) to the end viewer. As a starting point, a comprehensive survey of research on video encoding and transcoding techniques in the cloud is provided, focusing especially on the evolution of streaming and the relevance of the encoding process; proposals are also examined according to the type of virtualization and the content delivery modality. Two quality-based adaptive encoding approaches are developed with the objective of adjusting the quality of the entire video sequence to a desired level. The results indicate that the proposed solutions can reduce the video size while maintaining the same quality across all video segments. In addition, a scene-based encoding solution is proposed, and the impact of using downscaled video for scene detection is analyzed in terms of time, quality, and size. The results show that the required encoding time, the computational resource consumption, and the size of the encoded video are all reduced. The research also presents an architecture that parallelizes the jobs involved in DASH content preparation using the FaaS (Function-as-a-Service) paradigm on a serverless platform. This architecture is tested with three functions encapsulated in containers, which encode the videos and analyze their quality, obtaining promising results in terms of scalability and job distribution. Finally, a tool called VQMTK is developed, which integrates 14 video quality metrics in a Docker container, facilitating the evaluation of video quality in diverse environments. This tool can be useful in the field of video coding, in generating datasets to train deep neural networks, and in scientific and educational settings. In summary, the thesis offers innovative solutions and tools to improve efficiency and quality in the preparation and transmission of multimedia content in the cloud, providing a solid foundation for future research and development in this constantly evolving field.
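
    As an illustration of the scene-based preparation step described above (not the thesis' actual implementation), the sketch below detects scene cuts on a downscaled copy of the video using ffmpeg's scene-change score; the threshold, scale, and file names are assumptions.

        # Illustrative only: run ffmpeg's scene-change detection on a low-resolution copy
        # and collect the cut timestamps printed by the showinfo filter to stderr.
        import re
        import subprocess

        def detect_scene_cuts(src: str, threshold: float = 0.4, width: int = 320):
            cmd = [
                "ffmpeg", "-i", src,
                # downscale first so the scene score is computed on a cheap low-res copy
                "-vf", f"scale={width}:-2,select='gt(scene,{threshold})',showinfo",
                "-f", "null", "-",
            ]
            result = subprocess.run(cmd, stderr=subprocess.PIPE, text=True)
            # showinfo logs lines like "... pts_time:12.345 ..."
            return [float(m) for m in re.findall(r"pts_time:([0-9]+\.?[0-9]*)", result.stderr)]

        if __name__ == "__main__":
            cuts = detect_scene_cuts("input.mp4")
            print(f"{len(cuts)} scene cuts:", cuts[:10])
            # Each interval between consecutive cuts could then be encoded as an
            # independent job with per-scene settings before DASH packaging.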

    Building a Framework for High-performance In-memory Message-Oriented Middleware

    Get PDF
    Message-Oriented Middleware (MOM) is a popular class of software used in many distributed applications, ranging from business systems and social networks to gaming and streaming media services. As workloads continue to grow both in terms of the number of users and the amount of content, modern MOM systems face increasing demands in terms of performance and scalability. Recent advances in networking such as Remote Direct Memory Access (RDMA) offer a more efficient data transfer mechanism than the traditional kernel-level socket networking used by existing widely-used MOM systems. Unfortunately, RDMA's complex interface has made it difficult for MOM systems to utilize its capabilities. In this thesis, we introduce a framework called RocketBufs, which provides abstractions and interfaces for constructing high-performance MOM systems. Applications implemented using RocketBufs produce and consume data using regions of memory called buffers, while the framework is responsible for transmitting, receiving, and synchronizing buffer access. RocketBufs' buffer abstraction is designed to work efficiently with different transport protocols, allowing messages to be distributed over RDMA or TCP using the same APIs (i.e., by simply changing a configuration file). We demonstrate the utility and evaluate the performance of RocketBufs by using it to implement a publish/subscribe system called RBMQ. We compare it against two widely-used, industry-grade MOM systems, namely RabbitMQ and Redis. Our evaluations show that when using TCP, RBMQ achieves up to 1.9 times higher messaging throughput than RabbitMQ, a message queuing system with an equivalent flow control scheme. When RDMA is used, RBMQ shows significant gains in messaging throughput (up to 3.7 times higher than RabbitMQ and up to 1.7 times higher than Redis), as well as reductions in median delivery latency (up to 81% lower than RabbitMQ and 47% lower than Redis). In addition, on RBMQ subscriber hosts configured to use RDMA, data transfers occur with negligible CPU overhead regardless of the amount of data being transferred, leaving CPU resources available for other purposes such as processing data. To further demonstrate the flexibility of RocketBufs, we use it to build a live streaming video application by integrating RocketBufs into a web server to receive disseminated video data. Compared with the same application built with Redis, the RocketBufs-based dissemination host achieves up to 73% higher live streaming throughput, and the RocketBufs-based web server shows a reduction of up to 95% in CPU utilization, allowing up to 55% more concurrent viewers to be serviced.
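
    To picture the buffer abstraction described above, here is a purely hypothetical, in-process sketch; it is not the real RocketBufs API (which targets RDMA and TCP transports), and all class and method names are invented for illustration.

        # Hypothetical sketch, not the RocketBufs API: producers write into a named memory
        # region and a framework layer delivers its contents to subscribers. The real
        # system ships buffer contents over RDMA or TCP; this toy copies bytes in-process.
        from collections import defaultdict

        class Buffer:
            """A named region of memory that an application writes into."""
            def __init__(self, name: str, size: int):
                self.name = name
                self.data = bytearray(size)

        class ToyMessagingLayer:
            """Stands in for the framework: tracks subscribers and delivers buffer contents."""
            def __init__(self):
                self.subscribers = defaultdict(list)     # buffer name -> list of callbacks

            def subscribe(self, buffer_name: str, callback):
                self.subscribers[buffer_name].append(callback)

            def publish(self, buf: Buffer, length: int):
                payload = bytes(buf.data[:length])       # real frameworks avoid this copy
                for deliver in self.subscribers[buf.name]:
                    deliver(payload)

        if __name__ == "__main__":
            layer = ToyMessagingLayer()
            layer.subscribe("video-feed", lambda msg: print("got", len(msg), "bytes"))
            frame = Buffer("video-feed", size=4096)
            frame.data[:5] = b"hello"
            layer.publish(frame, length=5)               # -> "got 5 bytes"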