
    Proxy Support for HTTP Adaptive Streaming

    Not long ago, streaming video over the Internet meant only short clips of low-quality video. Now the possibilities seem endless, as professional productions are made available in high definition. This explosion of growth is the result of several factors, such as increasing network performance, advancements in video encoding technology, improvements to video streaming techniques, and a growing number of devices capable of handling video. However, despite these improvements, Internet video streaming is still evolving. HTTP adaptive streaming involves encoding a video at multiple quality levels and then dividing each quality level into small chunks. The player then determines which quality level to retrieve the next chunk from, optimizing playback for the underlying network conditions. This thesis first presents an experimental framework that allows adaptive streaming players to be analyzed and evaluated. Evaluation is beneficial because there are several concerns with the adaptive video streaming ecosystem, such as achieving high playback quality while also keeping that quality stable. The primary contribution of this thesis is the evaluation of prefetching by a proxy server as a means to improve streaming performance. This work considers a proxy server implementation that works with the extremely popular Netflix streaming service, and it is evaluated using two Netflix players. The results show its potential to improve video streaming performance in several scenarios. The proxy effectively increases the buffer capacity of the player, as chunks can be prefetched ahead of the player's requests and stored on the proxy for quick delivery once requested. This allows degradations in network conditions to be hidden from the player while the proxy serves prefetched data, preventing a reduction in video quality caused by an overreaction of the player. Further, the proxy can reduce the impact of a network bottleneck, achieving higher throughput by utilizing parallel connections to the server.
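
To make the prefetching idea concrete, the sketch below shows one way a proxy might serve a requested chunk from its cache and fetch the next few chunks over parallel connections. This is a minimal illustration with hypothetical names (PrefetchingProxy, fetch_from_origin, PREFETCH_DEPTH); it is not the Netflix-compatible implementation evaluated in the thesis.

```python
# Minimal sketch of proxy-side chunk prefetching for HTTP adaptive streaming.
# All names are illustrative assumptions, not the thesis's actual proxy.

import concurrent.futures

PREFETCH_DEPTH = 3  # how many future chunks to fetch ahead of the player


class PrefetchingProxy:
    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # callable: (quality, index) -> chunk bytes
        self.cache = {}                             # (quality, index) -> prefetched chunk
        self.pool = concurrent.futures.ThreadPoolExecutor(max_workers=PREFETCH_DEPTH)

    def handle_request(self, quality, index):
        """Serve one chunk to the player, then prefetch the next few in parallel."""
        chunk = self.cache.pop((quality, index), None)
        if chunk is None:
            chunk = self.fetch_from_origin(quality, index)      # cache miss: go to origin
        for i in range(index + 1, index + 1 + PREFETCH_DEPTH):
            if (quality, i) not in self.cache:
                # Assumes the player stays at the same quality; parallel connections to origin.
                self.pool.submit(self._prefetch, quality, i)
        return chunk

    def _prefetch(self, quality, index):
        self.cache[(quality, index)] = self.fetch_from_origin(quality, index)
```

Serving from the local cache hides short-lived throughput drops from the player, which is the effect the thesis evaluates: the player sees fast responses and does not downgrade quality prematurely.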

    Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming

    This thesis was created within the research line on Content Distribution Mechanisms in IP Networks, which has carried out its activity in several research projects, in the course "Mecanismos de Distribución de Contenidos en Redes IP" of the doctoral programme "Telecomunicaciones" taught by the Department of Communications of the UPV, and currently in the Máster Universitario en Tecnologías, Sistemas y Redes de Comunicación. The growth of the Internet, both in number of clients and in generated traffic, is widely known. This makes it possible to bring a multimedia interface to clients, where data, voice, video, music, and so on can converge. While this represents a business opportunity along multiple dimensions, scalability must be addressed seriously: the average performance of a system should not degrade as the number of clients or the volume of requested information increases. The study and analysis of web and streaming content distribution using CDNs is the object of this project. The approach is taken from a general perspective, leaving aside network-layer solutions such as IP multicast, as well as resource reservation, since they are not natively available in the Internet infrastructure. This leads to introducing the application layer as the coordinating framework for content distribution. Among these networks, also called overlay networks, a Content Delivery Network (CDN) has been chosen. These application-level networks are highly scalable and allow full control over the resources and functionality of every element of their architecture. This makes it possible to evaluate the performance of a CDN distributing multimedia content in terms of required bandwidth, response time experienced by clients, perceived quality, distribution mechanisms, time-to-live when caching, and so on. CDNs emerged in the late 1990s with the main objective of eliminating or attenuating the so-called flash-crowd effect, caused by a massive influx of clients. Nowadays, this type of network directs most of its efforts towards the ability to offer streaming media over the Internet. For a thorough analysis, this thesis proposes an initial simplified CDN model, at both the theoretical and practical levels. On the theoretical side, a mathematical model is presented that allows a CDN to be evaluated analytically. This model becomes considerably more complex as new functionality is added, so a simulation model is designed and developed that makes it possible, on the one hand, to verify the validity of the mathematical framework and, on the other, to establish a comparative baseline for the practical implementation of the CDN, a task carried out in the final phase of the thesis. In this way, the results obtained span theory, simulation, and practice.
    Molina Moreno, B. (2013). Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31637
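
As an illustration of the kind of user-redirection decision such a CDN must make, the sketch below picks a surrogate by a weighted cost of measured RTT and current load. It is a generic heuristic with invented names (Surrogate, redirect, w_rtt, w_load), not the redirection algorithm developed in the thesis.

```python
# Generic CDN user-redirection heuristic: choose the surrogate minimizing a
# weighted cost of network proximity and load. Illustrative only.

from dataclasses import dataclass


@dataclass
class Surrogate:
    name: str
    rtt_ms: float   # measured round-trip time from the client to this surrogate
    load: float     # current utilization in [0, 1]


def redirect(surrogates, w_rtt=0.7, w_load=0.3):
    """Return the surrogate with the lowest weighted cost of RTT and load."""
    def cost(s):
        return w_rtt * s.rtt_ms + w_load * (s.load * 100)  # scale load to be comparable to ms
    return min(surrogates, key=cost)


# Example: the request router would answer a DNS query or HTTP redirect with this node.
best = redirect([Surrogate("edge-madrid", 12.0, 0.8),
                 Surrogate("edge-valencia", 18.0, 0.2)])
print(best.name)  # edge-valencia wins once load is taken into account
```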

    Entrega de conteúdos multimédia em over-the-top: caso de estudo das gravações automáticas

    Over-The-Top (OTT) multimedia delivery is a very appealing approach for providing ubiquitous, flexible, and globally accessible services capable of low-cost and unrestrained device targeting. In spite of its appeal, the underlying delivery architecture must be carefully planned and optimized to maintain a high Quality-of-Experience (QoE) and rational resource usage, especially when migrating from services running on managed networks with established quality guarantees. To address the lack of holistic research works on OTT multimedia delivery systems, this Thesis focuses on an end-to-end optimization challenge, considering a migration use case of a popular Catch-up TV service from managed IP Television (IPTV) networks to OTT. A global study is conducted on the importance of Catch-up TV and its impact in today's society, demonstrating the growing popularity of this time-shift service, its relevance in the multimedia landscape, and its fitness as an OTT migration use case. Catch-up TV consumption logs are obtained from a Pay-TV operator's live production IPTV service with over 1 million subscribers to characterize demand and extract insights from service utilization at a scale and scope not yet addressed in the literature. This characterization is used to build demand forecasting models relying on machine learning techniques to enable static and dynamic optimization of OTT multimedia delivery solutions; these models produce accurate bandwidth and storage requirement forecasts and may be used to achieve considerable power and cost savings whilst maintaining a high QoE. A novel caching algorithm, Most Popularly Used (MPU), is proposed, implemented, and shown to outperform established caching algorithms in both simulation and experimental scenarios. The need for accurate QoE measurements in OTT scenarios supporting HTTP Adaptive Streaming (HAS) motivates the creation of a new QoE model capable of taking into account the impact of key HAS aspects. By addressing the complete content delivery pipeline in the envisioned content-aware OTT Content Delivery Network (CDN), this Thesis demonstrates that significant improvements are possible in next-generation multimedia delivery solutions.
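
The abstract names the MPU caching algorithm but does not define it, so the sketch below shows only a generic popularity-driven cache (request counting with eviction of the least-requested cached item) to illustrate the idea of popularity-based caching for Catch-up TV content; PopularityCache and its methods are hypothetical and do not reproduce the thesis's MPU.

```python
# Generic popularity-based cache sketch (LFU-style), for illustration only.

class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}        # content_id -> content
        self.popularity = {}   # content_id -> request count (kept even after eviction)

    def get(self, content_id, fetch):
        """Return content, counting the request and evicting the least popular item if full."""
        self.popularity[content_id] = self.popularity.get(content_id, 0) + 1
        if content_id in self.store:
            return self.store[content_id]              # cache hit
        content = fetch(content_id)                    # cache miss: fetch from origin
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda cid: self.popularity.get(cid, 0))
            del self.store[victim]                     # evict least-requested cached item
        self.store[content_id] = content
        return content
```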

    An Advanced A-V- Player to Support Scalable Personalised Interaction with Multi-Stream Video Content

    Current Audio-Video (A-V) players are limited to pausing, resuming, selecting, and viewing a single video stream of a live broadcast event that is orchestrated by a professional director. The main objective of this research is to investigate how to create a new custom-built interactive A-V player that enables viewers to personalise their own orchestrated views of live events from multiple simultaneous camera streams, by interacting with tracked moving objects, zooming in and out of targeted objects, and switching views based upon incidents detected in specific camera views. This involves researching and developing a personalisation framework that creates and maintains user profiles acquired both implicitly and explicitly, and modelling how this framework supports an evaluation of the effectiveness and usability of personalisation. Personalisation is considered from both an application-oriented and a quality-supervision-oriented perspective within the proposed framework. Personalisation models can be individually or collaboratively linked with specific personalisation usage scenarios. The quality of different personalised interactions, in terms of explicit evaluative metrics such as scalability and consistency, can be monitored and measured using specific evaluation mechanisms. This work was supported by the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. ICT-215248 and by Queen Mary University of London.
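
As a rough illustration of blending explicit and implicit profile data to rank camera streams, the hypothetical UserProfile sketch below weights viewer-set ratings against accumulated viewing time per stream; it is an assumption-laden example, not the personalisation framework proposed in the thesis.

```python
# Hypothetical user profile combining explicit ratings with implicit viewing behaviour.

class UserProfile:
    def __init__(self, explicit_weight=0.6):
        self.explicit = {}        # stream_id -> rating set directly by the viewer, in [0, 1]
        self.implicit = {}        # stream_id -> accumulated viewing seconds
        self.explicit_weight = explicit_weight

    def record_viewing(self, stream_id, seconds):
        self.implicit[stream_id] = self.implicit.get(stream_id, 0.0) + seconds

    def set_preference(self, stream_id, rating):
        self.explicit[stream_id] = max(0.0, min(1.0, rating))

    def rank_streams(self, stream_ids):
        """Order candidate camera streams by a blend of explicit and implicit interest."""
        total = sum(self.implicit.values()) or 1.0
        def score(sid):
            implicit_share = self.implicit.get(sid, 0.0) / total
            explicit_pref = self.explicit.get(sid, 0.0)
            return self.explicit_weight * explicit_pref + (1 - self.explicit_weight) * implicit_share
        return sorted(stream_ids, key=score, reverse=True)
```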

    Live popular Electronic music ‘performable recordings’

    This research focuses on Electronic Dance Music (EDM), or popular electronic music, and the way a band can perform live with the same sonic attributes as a studio production, investigating production techniques and performance practices that work for these contemporary mediatized live performances. For the purposes of this research, an EDM live act was formed, including conventional instruments, such as electric guitar and keyboards, and more sophisticated electronic devices, such as MIDI controllers and electronic drums, along with vocals. The emerging phenomenon of new types of bands or performers who try to bring the studio sound on stage created a gap between 'human' and 'non-human' that requires performers in this musical style to work with technology in new ways. This thesis builds upon research on authenticity and its relation to aspects of liveness in these types of live performances. More specifically, it builds upon Moore's tripartition of authenticities and the two forms of authenticity that are most salient in this process of 'musicking': the 1st-person and 3rd-person authenticities described in Moore's (2002) model. First-person authenticity relates to the extent to which the participants feel that the performers engage in authentic human expression through their performance. Third-person authenticity relates to the participants' assessment of what constitutes an authentic sonic example of a musical tradition or genre, in this case EDM. In addition to what it should sound like, third-person authenticity is also concerned with the appropriate 'tools' that should be used and with factors such as the coherence between the aural and the visual, employment of skill, performativity, and the constant awareness of a 'standard of achievement'. The aim is to create a musical process in which all the participants feel that the band is performing authentically while being sonically faithful to the genre or tradition. The key is the combination of machine accuracy with some aspects of human expressive performance in a way that maintains the integrity of the popular electronic musical style. Following on from the multiple theories that underpin this research, both qualitative and quantitative methods have been employed, including interviews, video observations, and audio data analysis. On this basis, a real-time production and performance process has been developed, called 'performable recordings': a type of music production that enables the artist to perform a musical piece live, using, in real time, the mixing and post-production processes that create the aesthetics of a studio-produced version. This model intends to promote and support performers' emotional expression and the creativity that comes from spontaneity, musicianship, face-to-face performance, and freedom of movement, qualities that in recent years have been minimized or eliminated by contemporary production processes and performance practices. Furthermore, it creates opportunities for performers and musicians to engage on stage with a broader range of modern musical styles and genres.