
    A network paradigm for very high capacity mobile and fixed telecommunications ecosystem sustainable evolution

    For very high capacity (VHC) networks, the main objective is to improve the quality of the end-user experience. This implies compliance with the key performance indicators (KPIs) required by applications. Application-level KPIs include throughput, download time, round-trip time, and video delay, and they depend on the end-to-end connection between the server and the end-user device. For VHC networks, Telco operators must provide the required application quality while also meeting the objectives of economic sustainability. Today, Telco operators rarely achieve these objectives, mainly because of the push to increase the bit-rate of access networks without considering the end-to-end KPIs of the applications. The main contribution of this paper is the definition of a deployment framework that addresses performance and cost issues for VHC networks. We identify three actions on which it is necessary to focus: first, limiting bit-rate through video compression; second, containing the packet-loss rate through artificial-intelligence algorithms for line stabilization; third, reducing latency (i.e., round-trip time) with edge-cloud computing. The concerted and gradual application of these measures can allow a Telco to escape the ultra-broadband "trap" of the access network, as defined in the paper. We propose to work on end-to-end optimization of the bandwidth utilization ratio. This leads to better performance as experienced by the end-user, and it allows a Telco operator to create new business models and obtain new revenue streams at a sustainable cost. As a concrete example, we describe how to realize mobile virtual and augmented reality, one of the most challenging future services. (Comment: 42 pages, 4 tables, 6 figures. v2: Revised English.)
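To make the end-to-end argument concrete, the sketch below estimates the achievable TCP throughput of a path from its round-trip time and packet-loss rate using the well-known Mathis approximation, and checks it against illustrative application KPIs. The KPI thresholds, path numbers, and function names are hypothetical and not taken from the paper; they only illustrate how the three levers (compression, loss reduction via line stabilization, edge-cloud latency reduction) interact.

```python
import math

def tcp_throughput_mbps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation of steady-state TCP throughput:
    rate ~ (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22."""
    if loss_rate <= 0:
        return float("inf")  # loss-free path: the model does not bound the rate
    rate_bps = (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))
    return rate_bps / 1e6

def meets_kpis(rtt_s: float, loss_rate: float,
               required_mbps: float, max_rtt_s: float) -> bool:
    """Check an end-to-end path against illustrative application-level KPIs."""
    return rtt_s <= max_rtt_s and tcp_throughput_mbps(1460, rtt_s, loss_rate) >= required_mbps

# Hypothetical paths: the three levers discussed above, applied step by step.
baseline          = dict(rtt_s=0.060, loss_rate=1e-3)  # access-centric deployment
with_edge         = dict(rtt_s=0.015, loss_rate=1e-3)  # edge cloud cuts RTT
with_edge_stable  = dict(rtt_s=0.015, loss_rate=1e-4)  # line stabilization cuts loss

for name, path in [("baseline", baseline), ("edge", with_edge), ("edge+stable", with_edge_stable)]:
    print(name,
          round(tcp_throughput_mbps(1460, **path), 1), "Mbit/s, KPIs met:",
          meets_kpis(required_mbps=50, max_rtt_s=0.020, **path))
```

Under these illustrative numbers only the combination of lower RTT and lower loss satisfies a 50 Mbit/s, 20 ms KPI target, which is the end-to-end point the paper argues for.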

    Towards enabling cross-layer information sharing to improve today's content delivery systems

    Content is omnipresent, and without content the Internet would not be what it is today. End users consume content throughout the day: checking the latest news on Twitter in the morning, streaming music in the background while working, streaming movies or playing online games in the evening, and even using apps (e.g., sleep trackers) while we sleep at night. All of these kinds of content place very specific and different requirements on the transport: at one end, online gaming often requires a low-latency connection but needs little throughput; at the other, streaming a video requires high throughput but performs quite poorly under packet loss. Yet all content is transferred opaquely over the same transport, adhering to a strict separation of network layers. Even a modern transport protocol such as Multi-Path TCP, which is capable of utilizing multiple paths, cannot take the above requirements or needs of the content into account for its path selection. In this work we challenge the layer separation and show that sharing information across the layers is beneficial for consuming web and video content. To this end, we created an event-based simulator for evaluating how applications can make informed decisions about which interfaces to use for delivering different content, based on a set of pre-defined policies that encode the (performance) requirements or needs of that content. Our policies achieve speedups of a factor of two in 20% of our cases, provide benefits in more than 50%, and create no overhead in any of the cases. For video content we created a full streaming system that allows even finer-grained information sharing between the transport and the application. Our streaming system, called VOXEL, enables applications to select dynamically, at frame granularity, which video data to transfer based on the current network conditions. VOXEL drastically reduces video stalls in the 90th percentile by up to 97% while not sacrificing the stream's visual fidelity. We confirmed our performance improvements in a real-user study in which 84% of the participants clearly preferred watching videos streamed with VOXEL over the state of the art.
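A minimal sketch of the kind of policy-driven interface selection described above, assuming hypothetical policy fields and interface metrics; it is not the simulator's or VOXEL's actual API, only an illustration of how per-content requirements can steer path choice.

```python
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    rtt_ms: float
    throughput_mbps: float
    loss_rate: float

@dataclass
class ContentPolicy:
    """Illustrative policy: per-content requirements the application exposes
    to the interface selector (field names are hypothetical)."""
    max_rtt_ms: float = float("inf")
    min_throughput_mbps: float = 0.0
    max_loss_rate: float = 1.0

def select_interface(policy: ContentPolicy, interfaces: list[Interface]) -> Interface | None:
    """Return the lowest-RTT interface that satisfies the policy, or None."""
    candidates = [i for i in interfaces
                  if i.rtt_ms <= policy.max_rtt_ms
                  and i.throughput_mbps >= policy.min_throughput_mbps
                  and i.loss_rate <= policy.max_loss_rate]
    return min(candidates, key=lambda i: i.rtt_ms) if candidates else None

# Hypothetical interfaces and content policies.
wifi = Interface("wifi", rtt_ms=35, throughput_mbps=80, loss_rate=0.005)
lte  = Interface("lte",  rtt_ms=20, throughput_mbps=25, loss_rate=0.001)

gaming    = ContentPolicy(max_rtt_ms=30)             # latency-sensitive, low throughput
streaming = ContentPolicy(min_throughput_mbps=50)    # throughput-sensitive

print(select_interface(gaming, [wifi, lte]).name)     # -> lte
print(select_interface(streaming, [wifi, lte]).name)  # -> wifi
```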

    QoE-Aware Resource Allocation For Crowdsourced Live Streaming: A Machine Learning Approach

    In the last decade, empowered by the technological advancements of mobile devices and the revolution in wireless mobile network access, the world has witnessed an explosion in crowdsourced live streaming. Ensuring a stable, high-quality playback experience is essential to maximize the viewers' Quality of Experience (QoE) and the content providers' profits. This can be achieved by adopting a geo-distributed cloud infrastructure to allocate the multimedia resources as close as possible to viewers, in order to minimize access delay and video stalls. Additionally, because of the instability of network conditions and the heterogeneity of end-users' capabilities, transcoding the original video into multiple bitrates is required. Video transcoding is a computationally expensive process in which, generally, a single cloud instance must be reserved to produce a single video bitrate representation. On-demand renting of resources or inadequate resource reservation may delay the video playback or serve viewers with a lower quality, while provisioning far more resources than required wastes the excess. In this thesis, we introduce a prediction-driven resource allocation framework to maximize the viewers' QoE and minimize the resource allocation cost. First, by exploiting the viewers' locations available in our unique dataset, we implement a machine learning model to predict the number of viewers near each geo-distributed cloud site. Second, based on the predicted results, which proved to be close to the actual values, we formulate an optimization problem to proactively allocate resources in the viewers' proximity. Additionally, we present a trade-off between the video access delay and the cost of resource allocation. Considering the complexity and infeasibility of our offline optimization for responding to the volume of viewing requests in real time, we further extend our work by introducing a resource forecasting and reservation framework for geo-distributed cloud sites. First, we formulate an offline optimization problem to allocate transcoding resources in the viewers' proximity, creating a trade-off between the network cost and the viewers' QoE. Second, based on the optimizer's resource allocation decisions on historical live videos, we create time-series datasets containing historical records of the optimal resources needed at each geo-distributed cloud site. Finally, we adopt machine learning to build distributed time-series forecasting models that proactively forecast the exact transcoding resources needed ahead of time at each geo-distributed cloud site. The results show that the predicted number of transcoding resources needed in each cloud site is close to the optimal number.
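The sketch below illustrates the forecast-then-reserve idea with a simple exponential-smoothing model standing in for the thesis's distributed time-series models; the site names, historical traces, and headroom parameter are hypothetical.

```python
import math

def forecast_next(history: list[float], alpha: float = 0.5) -> float:
    """Exponential smoothing over past optimal allocations; a stand-in for the
    per-site forecasting models described above (not the thesis's actual code)."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def reserve(forecasts: dict[str, float], headroom: float = 0.1) -> dict[str, int]:
    """Round forecasts up with a small safety margin: over-provisioning wastes
    instances, under-provisioning risks stalls or lower playback quality."""
    return {site: math.ceil(f * (1 + headroom)) for site, f in forecasts.items()}

# Hypothetical per-site traces of the offline optimizer's allocation decisions
# (number of transcoding instances per epoch).
history = {
    "eu-west": [4, 5, 6, 6, 7],
    "us-east": [10, 9, 11, 12, 12],
}
print(reserve({site: forecast_next(h) for site, h in history.items()}))
```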

    Teenustele orienteeritud ja tõendite-teadlik mobiilne pilvearvutus (Service-oriented and evidence-aware mobile cloud computing)

    Mobile and cloud computing are two of the biggest forces in computer science. While the cloud provides the user with a ubiquitous computational and storage platform for processing complex tasks, the smartphone gives the user the mobility to process simple tasks anytime and anywhere. Smartphones, driven by their need for processing power, storage space, and energy saving, are looking towards remote cloud infrastructure to solve these problems. As a result, the main research question of this work is how to bring the cloud infrastructure closer to the mobile user. In this thesis, we investigated how mobile cloud services can be integrated within mobile apps. We found that outsourcing a task to the cloud requires integrating and considering multiple aspects of the clouds, such as resource-intensive processing, asynchronous communication with the client, programmatic provisioning of resources (Web APIs), and cloud intercommunication.
Hence, we proposed a Mobile Cloud Middleware (MCM) framework that uses declarative service composition to outsource tasks from the mobile device to multiple clouds with minimal data transfer. It has also been demonstrated that computational offloading is a key strategy to extend the battery life of the device and improve the performance of mobile apps. We therefore investigated the issues that prevent the adoption of computational offloading and proposed a framework, namely Evidence-aware Mobile Computational Offloading (EMCO), which uses a community of devices to capture all the possible contexts of code execution as evidence. By analyzing the evidence, EMCO determines the suitable conditions for offloading. EMCO models the evidence in terms of distributions of execution rates for both the local and remote cases; by comparing those distributions, EMCO infers the right properties under which to offload. EMCO proves more effective than other computational offloading frameworks explored in the state of the art. Finally, we investigated how computational offloading can be utilized to enhance the perception that the user has of an app. Our main motivation for accelerating the perception at multiple response-time levels is to provide adaptive quality of experience (QoE), which can be used as an engagement strategy that increases the lifetime of a mobile app.
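As a rough illustration of an evidence-based offloading decision (not EMCO's actual inference), the sketch below compares samples of local and remote execution times for one method under one context and offloads only when the remote distribution is clearly better; all names and numbers are hypothetical.

```python
from statistics import mean, stdev

def should_offload(local_ms: list[float], remote_ms: list[float],
                   margin: float = 1.1) -> bool:
    """Toy evidence-based rule: offload when the remote execution-time
    distribution is clearly better than the local one, i.e. the remote mean
    plus one standard deviation stays below the local mean divided by a
    safety margin."""
    return mean(remote_ms) + stdev(remote_ms) < mean(local_ms) / margin

# Hypothetical evidence collected from a community of devices for one method
# in one context (device model, network type, input-size bucket).
local_samples  = [820, 790, 845, 870, 805]   # ms on the device
remote_samples = [310, 295, 280, 400, 320]   # ms including transfer to the cloud

print(should_offload(local_samples, remote_samples))  # True for these samples
```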