28 research outputs found

    Over-the-top multimedia content delivery: a case study of automatic recordings (Catch-up TV)

    Doctoral thesis in Electrical Engineering. Over-The-Top (OTT) multimedia delivery is a very appealing approach for providing ubiquitous, flexible, and globally accessible services capable of low-cost and unrestrained device targeting. In spite of its appeal, the underlying delivery architecture must be carefully planned and optimized to maintain a high Quality-of-Experience (QoE) and rational resource usage, especially when migrating from services running on managed networks with established quality guarantees. To address the lack of holistic research on OTT multimedia delivery systems, this Thesis focuses on an end-to-end optimization challenge, considering a migration use-case of a popular Catch-up TV service from managed IP Television (IPTV) networks to OTT. A global study is conducted on the importance of Catch-up TV and its impact on today's society, demonstrating the growing popularity of this time-shift service, its relevance in the multimedia landscape, and its fitness as an OTT migration use-case. Catch-up TV consumption logs are obtained from a Pay-TV operator's live production IPTV service with over 1 million subscribers to characterize demand and extract insights from service utilization at a scale and scope not yet addressed in the literature. This characterization is used to build demand forecasting models relying on machine learning techniques to enable static and dynamic optimization of OTT multimedia delivery solutions; these models produce accurate bandwidth and storage requirement forecasts and may be used to achieve considerable power and cost savings whilst maintaining a high QoE. A novel caching algorithm, Most Popularly Used (MPU), is proposed, implemented, and shown to outperform established caching algorithms in both simulation and experimental scenarios. The need for accurate QoE measurements in OTT scenarios supporting HTTP Adaptive Streaming (HAS) motivates the creation of a new QoE model capable of taking into account the impact of key HAS aspects. By addressing the complete content delivery pipeline in the envisioned content-aware OTT Content Delivery Network (CDN), this Thesis demonstrates that significant improvements are possible in next-generation multimedia delivery solutions.
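The abstract names MPU but does not spell out its mechanics. As a hedged illustration of a popularity-driven cache in the same spirit, the sketch below retains the most requested items and evicts the least popular entry when the cache is full; the class name, interface, and tie-breaking rule are assumptions for this sketch, not the algorithm from the Thesis.

```python
from collections import defaultdict

class PopularityCache:
    """Illustrative popularity-driven cache (an assumption, not the
    Thesis's MPU): the most requested items are retained, and the
    least popular entry (oldest on ties) is evicted when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}               # key -> cached content
        self.hits = defaultdict(int)  # key -> request count
        self.age = {}                 # key -> insertion order
        self.clock = 0

    def get(self, key):
        self.hits[key] += 1           # every request counts towards popularity
        return self.store.get(key)    # None signals a cache miss

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the least requested item; break ties by age.
            victim = min(self.store, key=lambda k: (self.hits[k], self.age[k]))
            del self.store[victim]
        self.clock += 1
        self.age[key] = self.clock
        self.store[key] = value
```

Under the skewed, Zipf-like demand typical of Catch-up TV, such a counter-based policy keeps long-lived popular programmes cached, whereas a recency-only policy such as LRU can be flushed by bursts of one-off requests.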

    Study, analysis, and development of a content delivery network and its user-redirection algorithm for web and streaming services

    This thesis was developed within the research line on Content Distribution Mechanisms in IP Networks, which has carried out its activity in several research projects and in the course "Mecanismos de Distribución de Contenidos en Redes IP" of the doctoral programme "Telecomunicaciones" taught by the Department of Communications at the UPV, and currently in the Máster Universitario en Tecnologías, Sistemas y Redes de Comunicación. The growth of the Internet, both in number of clients and in generated traffic, is widely known. It brings clients a multimedia interface where data, voice, video, music, and more can converge. While this represents a business opportunity along multiple dimensions, scalability must be taken seriously: the average performance of a system should not degrade as the number of clients or the volume of requested information grows. The study and analysis of web and streaming content distribution using CDNs is the object of this project. The approach is a generalist one, ignoring network-layer solutions such as IP multicast, as well as resource reservation, since neither is natively available in the Internet infrastructure. This leads to the introduction of the application layer as the coordinating framework for content distribution. Among such networks, also called overlay networks, a Content Delivery Network (CDN) was chosen. These application-level networks are highly scalable and allow full control over the resources and functionality of every element of their architecture. This makes it possible to evaluate the performance of a CDN distributing multimedia content in terms of required bandwidth, response time observed by clients, perceived quality, distribution mechanisms, time-to-live when using caching, and so on. CDNs were born at the end of the 1990s with the main objective of eliminating or attenuating the so-called flash-crowd effect caused by a massive influx of clients. Currently, this kind of network directs most of its efforts towards offering streaming media over the Internet. For a thorough analysis, this thesis proposes an initial simplified CDN model, both theoretical and practical. On the theoretical side, a mathematical model is presented that allows a CDN to be evaluated analytically. This model grows considerably more complex as new functionality is introduced, so a simulation model is proposed and developed that makes it possible, on the one hand, to check the validity of the mathematical framework and, on the other, to establish a comparative baseline for the practical implementation of the CDN, a task carried out in the final phase of the thesis. The results obtained thus span theory, simulation, and practice. Molina Moreno, B. (2013). Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31637
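The thesis develops its own user-redirection algorithm; as background on how application-layer request routing in a CDN can work, here is a minimal sketch that directs each client to the surrogate minimising a weighted mix of geographic distance and current load. The surrogate catalogue, the weighting parameter alpha, and the haversine proximity metric are all assumptions for illustration, not the algorithm from the thesis.

```python
import math

# Hypothetical surrogate catalogue: (name, latitude, longitude, load in [0, 1]).
SURROGATES = [
    ("valencia", 39.47, -0.38, 0.95),
    ("madrid",   40.42, -3.70, 0.20),
    ("paris",    48.86,  2.35, 0.90),
]

def _distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def redirect(client_lat, client_lon, alpha=0.7):
    """Pick the surrogate minimising a weighted sum of normalised
    distance (weight alpha) and current load (weight 1 - alpha)."""
    max_d = max(_distance_km(client_lat, client_lon, s[1], s[2])
                for s in SURROGATES)
    def score(s):
        d = _distance_km(client_lat, client_lon, s[1], s[2]) / (max_d or 1)
        return alpha * d + (1 - alpha) * s[3]
    return min(SURROGATES, key=score)[0]

# A client near Barcelona: Valencia is nearer but heavily loaded.
print(redirect(41.38, 2.17))  # -> "madrid"
```

In a deployed CDN this decision is typically taken at DNS resolution time or by an HTTP redirector, so the scoring function is where latency, load, and cost trade-offs are encoded.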

    Open Infrastructure for Edge Computing

    Edge computing, bringing the computation closer to end-users and data producers, has now firmly gained the status of an enabling technology for new kinds of emerging applications, such as Virtual/Augmented Reality and IoT. The motivation backing this rapidly developing computing paradigm is mainly two-fold. On the one hand, the goal is to minimize the latency that end-users experience, not only improving the quality of service but empowering new kinds of applications which would not even be possible given higher delays. On the other, edge computing aims to save core networking bandwidth from being overwhelmed by myriads of IoT devices sending their data to the cloud. After analyzing and aggregating IoT streams at edge servers, much less networking capacity is required to persist the remaining information in distant cloud datacenters. Despite a solid motivation and continuous interest from both academia and industry, edge computing is still in its nascency. To leave adolescence and take its place on a par with the cloud computing paradigm, finally forming a versatile edge-cloud environment, the newcomer needs to overcome a number of challenges. First of all, the computing infrastructure to deploy edge applications and services is very limited at the moment. There are initiatives supported by the telecommunication industry, such as Multi-access Edge Computing, and cloud providers plan to establish facilities near the edge of the network; however, we believe that even more effort will be required to make edge servers generally available. Second, to emerge and function efficiently, the ecosystem of edge computing needs practices, standards, and governance mechanisms of its own kind. This specificity originates from the highly dispersed nature of the edge, implying high heterogeneity of resources and diverse administrative control over the computing facilities. Finally, the third challenge is the dynamicity of the edge computing environment due to, e.g., varying demand and migrating clients. In this thesis, we outline the underlying principles of what we call the Open Infrastructure for Edge (OpenIE), identify its key features, and provide solutions for them. Intended to tackle the challenges mentioned above, OpenIE defines a set of common practices and loosely coupled technologies creating a unified environment out of highly heterogeneous and administratively partitioned edge computing resources. In particular, we design a protocol capable of discovering edge providers on a global scale. Further, we propose a framework of Intelligent Containers (ICONs), capable of autonomous decision making and of forming a service overlay in a large-scale edge-cloud setting. As edge providers need to be economically incentivized, we devise a truthful double auction mechanism where edge providers can meet application owners or administrators in need of deploying an edge service. Owing to truthfulness, the best strategy for every participant in our auction is to bid their privately known valuation (or cost), making complex market behavior strategies obsolete. We analyze the potential of distributed ledgers to serve OpenIE's decentralized agreement and transaction handling and show how our auction can be implemented with the help of distributed ledgers. With the key building blocks of OpenIE mentioned above, we hope to make entering the market of edge service provisioning as easy as possible for anyone interested. We hope that with the emergence of independent edge providers, edge computing will finally become pervasive.
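The thesis designs its own truthful double auction for matching edge providers with service buyers. As background on how truthfulness can be achieved, the sketch below implements the classic McAfee (1992) trade-reduction double auction, a standard truthful mechanism; treating it as a stand-in for the thesis's mechanism is an assumption.

```python
def mcafee_double_auction(bids, asks):
    """McAfee's trade-reduction double auction.

    bids: buyers' reported valuations; asks: sellers' reported costs.
    Truthful: no participant gains by misreporting. Returns the number
    of trades and the (buyer_price, seller_price) pair, or (0, None).
    """
    b = sorted(bids, reverse=True)   # highest bids first
    s = sorted(asks)                 # lowest asks first
    k = 0
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1                       # k = number of efficient trades
    if k == 0:
        return 0, None
    if k < min(len(b), len(s)):
        p = (b[k] + s[k]) / 2        # candidate uniform clearing price
        if s[k - 1] <= p <= b[k - 1]:
            return k, (p, p)         # all k pairs trade at price p
    # Trade reduction: drop the k-th pair; buyers pay b[k-1], sellers
    # receive s[k-1], and the auctioneer keeps the (non-negative) gap.
    return k - 1, (b[k - 1], s[k - 1])

trades, prices = mcafee_double_auction([10, 8, 6, 3], [2, 4, 5, 9])
print(trades, prices)  # 3 trades at a uniform price of 6.0
```

Sacrificing at most one efficient trade is what buys truthfulness and weak budget balance here, which is exactly the kind of property an open edge marketplace needs so that providers cannot profit from strategic bidding.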

    Network coding for advanced multimedia applications

    Network coding is a paradigm that allows an efficient use of the capacity of communication networks. It maximizes the throughput in a multi-hop multicast communication and reduces the delay. In this thesis, we focus our attention on the integration of the network coding framework into multimedia applications, and in particular into advanced systems that provide enhanced video services to the users. Our contributions concern several instances of advanced multimedia communications: an efficient framework for the transmission of a live stream making joint use of network coding and multiple description coding; a novel transmission strategy for lossy wireless networks that guarantees a trade-off between loss resilience and short delay based on a rate-distortion optimized scheduling of the video frames, which we also extended to the case of interactive multi-view streaming; a distributed social caching system that, using network coding in conjunction with knowledge of the users' preferences in terms of views, is able to select a replication scheme that provides high video quality by accessing only other members of the social group, without incurring the access cost associated with a connection to a central server and without exchanging large tables of metadata to keep track of the replicated parts; and, finally, a study on using blind source separation techniques to reduce the overhead incurred by network coding schemes based on error-detecting techniques such as parity coding and message digest generation. All our contributions are aimed at using network coding to enhance the quality of video transmission in terms of perceived distortion and delay.
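As a concrete illustration of the primitive these contributions build on, the sketch below performs random linear network coding (RLNC) over GF(2^8): a node forwards random linear combinations of the source packets together with the coefficient vectors, and any set of combinations whose coefficient matrix is full-rank suffices to decode. This generic RLNC example is an assumption for illustration, not the thesis's specific scheme.

```python
import random

def gf_mul(a, b):
    """Multiply in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def encode(packets, rng=random):
    """Emit one coded packet: a random GF(2^8) linear combination of
    equal-length source packets, plus its coefficient vector."""
    coeffs = [rng.randrange(256) for _ in packets]
    coded = [0] * len(packets[0])
    for c, pkt in zip(coeffs, packets):
        for i, byte in enumerate(pkt):
            coded[i] ^= gf_mul(c, byte)   # addition in GF(2^8) is XOR
    return coeffs, coded

sources = [b"video chunk #1..", b"video chunk #2.."]
coeffs, coded = encode([list(p) for p in sources])
print(coeffs, bytes(coded))
```

A receiver decodes by Gaussian elimination over GF(2^8) once it has collected as many linearly independent coded packets as there are sources, which is what makes the scheme robust to losses on any individual path.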

    A Framework for Digital Emotions

    As new media become more ubiquitous, our emotional experiences in digital space are increasing exponentially as well. While there is much talk of “affective” computing and “affective” new media art, a disconnect exists between networked emotions and the popular media that they inhabit. This research presents a theoretical framework for assessing “digital emotions”—a term that describes the feedback process between digital technologies and the body with respect to short, networked inscriptions of emotion and the (re)experience of those inscriptions within the body and through digital space. Digital emotions display five basic characteristics that can be applied to a variety of media environments: (1) They describe a process of feedback that links short, emotive inscriptions in digital environments to users and their (re)experiences of those inscriptions; (2) This feedback process includes, but is not limited to, the inscriber, the medium, and the receiver, and the emotive experience fuels the initial connectivity and any further connectivity; (3) The emotional value varies depending on the media, the community of users, and the aesthetic experience of the digital emotion; (4) Digital emotions influence our emotional repertoire by normalizing our paradigm scenarios; and (5) They are highly malleable based on changes in technologies and their ability to both expand and contract emotional experiences in real time. The core characteristics of digital emotions are applied to three broad and overlapping categories: technology, community, and aesthetic experience. Each of these aspects of digital emotions works together, yet they exist along the massive spectrum of our online, emotional experiences—from our casual click of the “like” button to digital community artworks. Applied to digital spaces along this spectrum, digital emotions illuminate the feedback process that occurs between the media, the network, and the environment. The framework ultimately suggests that the process of digital emotions explicates emotional experiences that could only occur in digital space and are therefore unique to digital culture.

    Use of Inferential Statistics to Design Effective Communication Protocols for Wireless Sensor Networks

    This thesis explores the issues and techniques associated with employing the principles of inferential statistics to design effective Medium Access Control (MAC), routing, and duty cycle management strategies for multihop Wireless Sensor Networks (WSNs). The main objectives of these protocols are to maximise the throughput of the network, to prolong the lifetime of nodes, and to reduce the end-to-end delay of packets over a general network scenario, without particular consideration for specific topology configurations, traffic patterns, or routing policies. WSNs represent one of the leading-edge technologies that have received substantial research effort due to their prominent roles in many applications. However, designing effective communication protocols for WSNs is particularly challenging due to the scarce resources of these networks and the requirement for large-scale deployment. The MAC, routing, and duty cycle management protocols are amongst the important strategies required to ensure correct operation of WSNs. This thesis makes use of inferential statistics to design these protocols; inferential statistics was selected as it provides a rich design space with powerful approaches and methods. The MAC protocol proposed in this thesis exploits the statistical characteristics of the Gamma distribution to enable each node to adjust its contention parameters dynamically based on its inference of channel occupancy. This technique reduces the service time of packets and raises throughput by improving channel utilisation. Reducing the service time minimises the energy consumed in contention to access the channel, which in turn prolongs the lifetime of nodes. The proposed duty cycle management scheme uses non-parametric Bayesian inference to enable each node to determine the best times and durations for its sleep periods without imposing overheads on the network. Hence the lifetime of nodes is prolonged by mitigating the energy wasted in overhearing and idle listening. Prolonging the lifetime of nodes increases the throughput of the network and reduces the end-to-end delay, as it allows nodes to route their packets over optimal paths for longer periods. The proposed routing protocol uses a state-of-the-art inference technique dubbed spatial reasoning that enables each node to infer the spatial relationships between nodes without overwhelming the network with control packets. As a result, the end-to-end delay is reduced while the throughput and lifetime are increased. Besides the proposed protocols, this thesis utilises the analytical aspects of statistics to develop rigorous analytical models that can accurately predict the queuing and medium access delay and the energy consumption over multihop networks. Moreover, this thesis provides a broader perspective on the design of communication protocols for WSNs by casting the operations of these networks in the domains of the artificial chemistry discipline and the harmony search optimisation algorithm.
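The abstract describes the Gamma-based MAC adaptation only at a high level. As a hedged sketch of the underlying idea, the code below fits a Gamma distribution to observed channel-busy durations by the method of moments and scales a node's contention window to an upper-tail proxy (mean plus two standard deviations) of the fitted distribution; the fitting method, the tail proxy, the slot duration, and all parameter names are assumptions for illustration, not the thesis's protocol.

```python
import statistics

def fit_gamma_moments(samples):
    """Method-of-moments Gamma fit: shape k = mean^2 / variance,
    scale theta = variance / mean."""
    m = statistics.fmean(samples)
    v = statistics.variance(samples)
    return m * m / v, v / m  # (shape, scale)

def backoff_window(busy_durations_us, slot_us=20, min_slots=16, max_slots=1024):
    """Size the contention window from inferred channel occupancy:
    cover the bulk of busy periods (mean + 2*std of the fitted Gamma),
    clamped to the protocol's window bounds."""
    k, theta = fit_gamma_moments(busy_durations_us)
    mean = k * theta
    std = (k ** 0.5) * theta          # Gamma std = sqrt(k) * theta
    target_us = mean + 2 * std        # upper-tail proxy for busy time
    slots = round(target_us / slot_us)
    return max(min_slots, min(max_slots, slots))

# Example: microsecond busy-period samples overheard on the channel.
print(backoff_window([900, 1500, 1200, 2500, 800, 1900, 1400]))  # -> 132
```

The design intent this illustrates is that a node observing longer or more variable busy periods widens its window (reducing collisions and wasted retransmissions), while a lightly loaded channel shrinks it back towards the minimum, keeping service times short.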