604 research outputs found

    vSkyConf: Cloud-assisted Multi-party Mobile Video Conferencing

    Get PDF
    As an important application in today's busy world, mobile video conferencing facilitates virtual face-to-face communication with friends, family and colleagues via their mobile devices on the move. However, how to provision high-quality, multi-party video conferencing experiences over mobile devices is still an open challenge. The fundamental reason is the lack of computation and communication capacity on mobile devices to scale to large conferencing sessions. In this paper, we present vSkyConf, a cloud-assisted mobile video conferencing system designed to fundamentally improve the quality and scale of multi-party mobile video conferencing. By employing a surrogate virtual machine in the cloud for each mobile user, we allow fully scalable communication among the conference participants via their surrogates, rather than directly. The surrogates exchange conferencing streams with each other, transcode the streams to the most appropriate bit rates, and buffer the streams for the most efficient delivery to the mobile recipients. A fully decentralized, optimal algorithm is designed to decide the best paths for the streams and the most suitable surrogates for video transcoding along those paths, such that the limited bandwidth is fully utilized to deliver streams of the highest possible quality to the mobile recipients. We also carefully tailor a buffering mechanism on each surrogate to cooperate with the optimal stream distribution. We have implemented vSkyConf on Amazon EC2 and verified the excellent performance of our design compared to widely adopted unicast solutions.
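
    The decentralized routing algorithm itself is not given in the abstract; as a rough illustration of the bandwidth-aware path selection it describes, the Python sketch below computes a max-bottleneck (widest) path between two surrogates, i.e. the route that can sustain the highest bit rate. The surrogate names, graph structure and bandwidth values are assumptions for illustration, not vSkyConf's actual protocol.

```python
import heapq

def widest_path(graph, src, dst):
    """Max-bottleneck (widest) path between two surrogates: the route
    whose smallest link capacity is largest, a stand-in for picking the
    highest deliverable bit rate (illustrative only).
    graph: {node: {neighbour: available_bandwidth_kbps}}"""
    best = {src: float("inf")}        # best known bottleneck bandwidth per node
    prev = {}
    heap = [(-best[src], src)]
    while heap:
        neg_bw, u = heapq.heappop(heap)
        bw = -neg_bw
        if u == dst:
            break
        if bw < best.get(u, 0.0):     # stale heap entry
            continue
        for v, cap in graph.get(u, {}).items():
            bottleneck = min(bw, cap)
            if bottleneck > best.get(v, 0.0):
                best[v] = bottleneck
                prev[v] = u
                heapq.heappush(heap, (-bottleneck, v))
    if dst not in best:
        return None, 0.0              # destination unreachable
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], best[dst]

# Hypothetical surrogate mesh, bandwidth in kbit/s:
mesh = {"A": {"B": 800, "C": 2000}, "B": {"D": 1500},
        "C": {"D": 1200}, "D": {}}
print(widest_path(mesh, "A", "D"))    # (['A', 'C', 'D'], 1200)
```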

    Live Streaming in P2P and Hybrid P2P-Cloud Environments for the Open Internet

    Get PDF
    Peer-to-peer (P2P) live media streaming is an emerging technology that lowers the barrier to streaming live events over the Internet. However, providing a high-quality media stream using P2P overlay networks is challenging and gives rise to a number of issues: (i) how to guarantee quality of service (QoS) in the presence of dynamism, (ii) how to incentivize nodes to participate in media distribution, (iii) how to avoid bottlenecks in the overlay, and (iv) how to deal with nodes that reside behind Network Address Translator (NAT) gateways. In this thesis, we answer the above research questions in the form of new algorithms and systems. First, we address problems (i) and (ii) by presenting our P2P live media streaming solutions: Sepidar, which is a multiple-tree overlay, and GLive, which is a mesh overlay. In both models, nodes with higher upload bandwidth are positioned closer to the media source. This structure reduces playback latency and increases playback continuity at the nodes, and also incentivizes nodes to provide more upload bandwidth. We use a reputation model to improve the participation of nodes in media distribution in Sepidar and GLive. In both systems, nodes audit the behaviour of their directly connected nodes by getting feedback from other nodes. Nodes that upload more of the stream get a relatively higher reputation, and proportionally higher quality streams. To construct our streaming overlay, we present a distributed market model inspired by the Bertsekas auction algorithm, although our model does not rely on a central server with global knowledge. In our model, each node has only partial information about the system. Nodes acquire knowledge of the system by sampling it through the Gradient overlay, which facilitates the discovery of nodes with similar upload bandwidth. We address the bottleneck problem, problem (iii), by presenting CLive, which satisfies real-time constraints on the delay between the generation of the stream and its actual delivery to users. We resolve this problem by borrowing resources (helpers) from the cloud upon need. In our approach, helpers are added to the overlay on demand to increase the total available bandwidth, thus increasing the probability of receiving the video on time. As the use of cloud resources costs money, we model the problem as the minimization of the economic cost, provided that a set of QoS constraints is satisfied. Finally, we solve the NAT problem, problem (iv), by presenting two NAT-aware peer sampling services (PSS): Gozar and Croupier. Traditional gossip-based PSSs break down when a high percentage of nodes are behind NATs. We overcome this problem in Gozar by using one-hop relaying to communicate with the nodes behind NATs. Croupier similarly implements a gossip-based PSS, but without the use of relaying.
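
    As a rough, centralized illustration of the placement principle above (nodes with more upload bandwidth sit closer to the source), the sketch below greedily builds the tree by descending upload capacity. The real Sepidar/GLive protocols reach a comparable structure through a decentralized auction-style market, which is not reproduced here; peer names and slot counts are invented.

```python
def build_layers(peers, source_fanout=2):
    """Greedy, centralized approximation of the layered overlay:
    peers offering more upload slots end up closer to the media source.
    `peers` maps peer id -> number of upload slots it contributes."""
    ordered = sorted(peers, key=peers.get, reverse=True)
    slots = {"source": source_fanout}
    parents, frontier = {}, ["source"]
    for peer in ordered:
        while slots[frontier[0]] == 0:    # skip parents with no free slots
            frontier.pop(0)
        parent = frontier[0]
        parents[peer] = parent            # attach below the best-placed parent
        slots[parent] -= 1
        slots[peer] = peers[peer]
        frontier.append(peer)
    return parents

# Example: four peers with different upload slot counts.
print(build_layers({"p1": 3, "p2": 0, "p3": 1, "p4": 2}))
# {'p1': 'source', 'p4': 'source', 'p3': 'p1', 'p2': 'p1'}
```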

    A novel P2P and cloud computing hybrid architecture for multimedia streaming QoS cost functions

    Full text link
    Since its appearance, peer-to-peer technology has given rise to various multimedia streaming applications. Today, cloud computing offers different service models as a base for successful end-user applications. In this paper we propose joining peer-to-peer and cloud computing into a new architectural realization of a distributed cloud computing network for multimedia streaming, operating in both a centralized and a peer-to-peer distributed manner. This architecture merges private and public clouds and is intended for commercial use, but at the same time it is scalable enough to allow non-profit use. In order to take advantage of the cloud paradigm and make multimedia streaming more efficient, we introduce APIs in the cloud containing built-in functions for automatic QoS calculation, which permit negotiating QoS parameters such as bandwidth, jitter and latency between a cloud service provider and its potential clients.
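
    To make the idea of a built-in QoS calculation concrete, here is a hypothetical scoring function over the three parameters the abstract mentions (bandwidth, jitter, latency). The thresholds, weights and function name are invented for illustration and are not part of the proposed APIs.

```python
def qos_score(bandwidth_mbps, latency_ms, jitter_ms,
              min_bandwidth=2.0, max_latency=150.0, max_jitter=30.0):
    """Toy QoS scoring function over the negotiated parameters.
    Returns a score in [0, 1]; 0 means a hard requirement is violated.
    All thresholds and weights are illustrative assumptions."""
    if bandwidth_mbps < min_bandwidth:
        return 0.0
    if latency_ms > max_latency or jitter_ms > max_jitter:
        return 0.0
    # normalise each metric against its bound, then combine with weights
    b = min(bandwidth_mbps / (4 * min_bandwidth), 1.0)
    l = 1.0 - latency_ms / max_latency
    j = 1.0 - jitter_ms / max_jitter
    return round(0.5 * b + 0.3 * l + 0.2 * j, 3)

print(qos_score(bandwidth_mbps=6.0, latency_ms=40.0, jitter_ms=10.0))  # 0.728
```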

    An Optimized AMS Based Cloud Downloading Service with Advanced Caching and Intelligent Data Distribution Mechanism

    Get PDF
    The popularity of peer-to-peer video content downloading has surged due to diverse content availability and convenient sharing among users. However, scaling systems to accommodate the growing number of users and content items poses a challenge. This research aims to optimize video content downloading in peer-to-peer systems. The objective is to improve performance by developing advanced caching mechanisms, an intelligent data distribution algorithm, and efficient bandwidth resource management. The proposed approach involves implementing innovative caching mechanisms that store frequently accessed content closer to users, reducing download time. An intelligent data distribution algorithm minimizes bottlenecks and maximizes download speeds. Efficient bandwidth resource management ensures fair allocation. Results demonstrate significant enhancements in download time and overall system performance, leading to improved user experience. This research addresses the need for an optimized video content downloading system to handle increasing user and content volumes. The findings hold the potential to enhance user experiences, facilitate seamless video sharing, and advance peer-to-peer video content downloading
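
    The paper describes its caching mechanism only at a high level; the minimal LRU sketch below illustrates the general idea of keeping recently requested chunks close to users. The class and method names are ours, not the paper's.

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache for keeping popular video chunks near users
    (illustration of the general caching idea, not the paper's design)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, chunk_id):
        if chunk_id not in self.store:
            return None                        # miss: fetch from peers/origin
        self.store.move_to_end(chunk_id)       # mark as most recently used
        return self.store[chunk_id]

    def put(self, chunk_id, data):
        self.store[chunk_id] = data
        self.store.move_to_end(chunk_id)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
```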

    Optimizing on-demand resource deployment for peer-assisted content delivery (PhD thesis)

    Full text link
    Increasingly, content delivery solutions leverage client resources in exchange for service in a peer-to-peer (P2P) fashion. Such peer-assisted service paradigms promise significant infrastructure cost reduction, but suffer from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity of these services to the clients. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to optimally utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the uplink capacity of clients. We target three applications that require the delivery of fresh as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time -- the time it takes to deliver the content to all clients in a group. The second application is live streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, and especially for bandwidth-intensive applications. For each of the above applications, we develop mathematical models that optimally allocate the already available resources, as well as the additional on-demand resources needed to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate those techniques through simulation and/or implementation. (Major Advisor: Azer Bestavros)
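
    For the bulk-synchronous case, a standard fluid-model lower bound on the distribution time (an illustration of the kind of model involved, not necessarily the exact formulation in the thesis) for a file of size $F$, a server with upload capacity $u_s$, and $N$ clients with upload capacities $u_i$ and download capacities $d_i$ is

```latex
T_{\min} \;=\; \max\!\left\{ \frac{F}{u_s},\;
                             \frac{F}{\min_i d_i},\;
                             \frac{N F}{\,u_s + \sum_{i=1}^{N} u_i\,} \right\}
```

    Leased angel upload capacity enlarges the denominator of the last term, which is how on-demand cloud resources can shrink the worst-case distribution time.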

    Optimizing on-demand resource deployment for peer-assisted content delivery

    Full text link
    Increasingly, content delivery solutions leverage client resources in exchange for services in a peer-to-peer (P2P) fashion. Such a peer-assisted service paradigm promises significant infrastructure cost reduction, but suffers from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity of these services to clients, especially for real-time applications where content cannot be cached. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to efficiently utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the upstream capacity of clients. We target three applications that require the delivery of real-time as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time - the time it takes to deliver the content to all clients in a group. The second application is live video streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, especially for clients running bandwidth-intensive applications. For each of the above applications, we develop analytical models that efficiently allocate the already available resources, as well as the additional on-demand resources needed to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate these techniques through simulation and/or implementation.
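
    For the live-streaming case, a simple capacity argument (an illustration, not the thesis's exact model) gives the minimum extra upload bandwidth that must be leased from angels to sustain a stream of rate $r$ for $N$ clients when the source uploads at $u_s$ and client $i$ contributes upload capacity $u_i$:

```latex
U_{\mathrm{angel}} \;\ge\; \max\!\left( 0,\; N r \;-\; u_s \;-\; \sum_{i=1}^{N} u_i \right)
```

    The aggregate download rate $N r$ must be matched by aggregate upload, so any shortfall has to come from the on-demand cloud nodes.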

    DecVi: Adaptive Video Conferencing on Open Peer-to-Peer Networks

    Full text link
    Video conferencing has become the preferred way of interacting virtually. Current video conferencing applications, like Zoom, Teams or WebEx, are centralized, cloud-based platforms whose performance crucially depends on the proximity of clients to their data centers. Clients from low-income countries are particularly affected, as most data centers of major cloud providers are located in economically advanced nations. Centralized conferencing applications also suffer from occasional outages and are embattled by serious privacy-violation allegations. In recent years, decentralized video conferencing applications built over p2p networks and incentivized through blockchain have become popular. A key characteristic of these networks is their openness: anyone can host a media server on the network and be rewarded for providing service. Strong economic incentives, combined with a lower barrier to entry, extend server coverage even to remote regions of the world. These properties, however, also lead to a security problem: a server may obfuscate its true location in order to gain an unfair business advantage. In this paper, we consider the problem of multicast tree construction for video conferencing sessions in open p2p conferencing applications. We propose DecVi, a decentralized multicast tree construction protocol that adaptively discovers efficient tree structures based on an exploration-exploitation framework. DecVi is motivated by the combinatorial multi-armed bandit problem and uses a succinct learning model to compute effective actions. Despite operating in a multi-agent setting, with each server having only limited knowledge of the global network and without cooperation among servers, we show experimentally that DecVi achieves quality-of-experience similar to a centralized, globally optimal algorithm while achieving higher reliability and flexibility.
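
    DecVi's combinatorial bandit formulation is not spelled out in the abstract; the sketch below reduces the exploration-exploitation idea to a plain UCB1 bandit that picks a parent media server based on observed quality-of-experience rewards. The candidate names, the reward scale and the single-choice setting are assumptions for illustration.

```python
import math

class UCB1ParentSelector:
    """UCB1 bandit over candidate parent servers: balances trying
    unproven servers (exploration) against reusing servers that have
    delivered good QoE so far (exploitation)."""
    def __init__(self, candidates):
        self.counts = {c: 0 for c in candidates}
        self.rewards = {c: 0.0 for c in candidates}
        self.t = 0

    def choose(self):
        self.t += 1
        for c, n in self.counts.items():   # play every arm once first
            if n == 0:
                return c
        return max(self.counts, key=lambda c:
                   self.rewards[c] / self.counts[c]
                   + math.sqrt(2 * math.log(self.t) / self.counts[c]))

    def update(self, candidate, qoe_reward):
        """Record the QoE observed (e.g. in [0, 1]) after relaying
        through `candidate`."""
        self.counts[candidate] += 1
        self.rewards[candidate] += qoe_reward

selector = UCB1ParentSelector(["srv-eu", "srv-us", "srv-asia"])
parent = selector.choose()
selector.update(parent, qoe_reward=0.8)
```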

    On service optimization in community network micro-clouds

    Get PDF
    Joint doctorate (cotutelle) between Universitat Politècnica de Catalunya and KTH Royal Institute of Technology. Internet coverage in the world is still weak, and local communities are required to come together and build their own network infrastructures. People collaborate for the common goal of accessing the Internet and cloud services by building community networks (CNs). The use of Internet cloud services has grown over the last decade. Community network cloud infrastructures (i.e. micro-clouds) have been introduced to run services inside the network, without the need to consume them from the Internet. CN micro-clouds aim to provide not only improved service performance, but also an entry point to an alternative to Internet cloud services in CNs. However, adapting services to run in CN micro-clouds poses its own challenges, since low-capacity devices and wireless connections without central management predominate in CNs. Furthermore, the large and irregular topology of the network, high software and hardware diversity, and differing service requirements make CN micro-clouds a challenging environment in which to run local services and to achieve service performance and quality similar to Internet cloud services. In this thesis, our main objective is the optimization of services (performance, quality) in CN micro-clouds, facilitating the introduction of further services and motivating members to use CN micro-cloud services as an alternative to Internet services. We present an approach to handling services in CN micro-cloud environments that improves service performance and quality to a level approximating Internet services, while also giving the community motivation to use CN micro-cloud services. Furthermore, we break the problem into different levels (resource, service and middleware), propose a model that provides improvements at each level, and contribute information that helps to support the improvements (in terms of service performance and quality) at the other levels. At the resource level, we facilitate the use of community devices by utilizing virtualization techniques that isolate and manage CN micro-cloud services in order to obtain a multi-purpose environment that fosters services in the CN micro-cloud. At the service level, we build a monitoring tool tailored to CN micro-clouds that helps us analyze service behavior and performance in CN micro-clouds. The information gathered then enables adaptation of the services to the environment in order to improve their quality and performance under CN conditions. At the middleware level, we build overlay networks, guided by social information, as the main communication system, in order to improve the paths and routes of the nodes and the transmission of data across the network by utilizing the relationships already established in the social network or communities of practice related to the CNs. As a result, service performance in CN micro-clouds can become more stable with respect to resource usage, performance and user-perceived quality.
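
    As a toy illustration of the middleware-level idea (shaping the overlay with social information), the sketch below ranks candidate neighbours of a node by an assumed social tie strength and keeps the strongest ones. The data structure, tie weights and cut-off are invented, not taken from the thesis.

```python
def pick_overlay_neighbors(node, ties, k=3):
    """Keep the k candidate overlay neighbours with the strongest
    social ties to `node`; `ties` maps (node_a, node_b) -> strength
    in [0, 1] (weights here are illustrative assumptions)."""
    candidates = [(b, w) for (a, b), w in ties.items() if a == node]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [b for b, _ in candidates[:k]]

# Example with made-up tie strengths:
ties = {("n1", "n2"): 0.9, ("n1", "n3"): 0.4,
        ("n1", "n4"): 0.7, ("n1", "n5"): 0.1}
print(pick_overlay_neighbors("n1", ties, k=2))   # ['n2', 'n4']
```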

    A Comprehensive Analysis of Swarming-based Live Streaming to Leverage Client Heterogeneity

    Full text link
    Due to missing IP multicast support on an Internet scale, over-the-top media streams are delivered with the help of overlays, as used by content delivery networks and their peer-to-peer (P2P) extensions. In this context, mesh/pull-based swarming plays an important role, either as a pure streaming approach or in combination with tree/push mechanisms. However, the impact of realistic client populations with heterogeneous resources is not yet fully understood. In this technical report, we contribute to closing this gap by mathematically analysing the most basic scheduling mechanisms, latest deadline first (LDF) and earliest deadline first (EDF), in a continuous-time Markov chain framework and combining them into a simple, yet powerful, mixed strategy to leverage inherent differences in client resources. The main contributions are twofold: (1) a mathematical framework for swarming on random graphs is proposed, with a focus on LDF and EDF strategies in heterogeneous scenarios; (2) a mixed strategy, named SchedMix, is proposed that leverages peer heterogeneity. The proposed strategy, SchedMix, is shown to outperform the other two strategies using different abstractions: a mean-field theoretic analysis of buffer probabilities, simulations of a stochastic model on random graphs, and a full-stack implementation of a P2P streaming system. Comment: Technical report and supplementary material to http://ieeexplore.ieee.org/document/7497234
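
    To make the scheduling strategies concrete, the sketch below shows the chunk-selection rules in their simplest form: EDF requests the missing chunk closest to its playback deadline, LDF the one farthest away, and a SchedMix-style rule lets well-provisioned peers behave LDF-like while weak peers stay EDF-like. The interface and the capacity threshold are assumptions; the report's actual analysis is carried out on a continuous-time Markov chain model.

```python
def next_chunk(missing, upload_capacity, ldf_threshold=1.0):
    """Pick the next chunk index to request from the set of missing
    chunks (larger index = later playback deadline).
    EDF -> min(missing): most urgent chunk first.
    LDF -> max(missing): newest chunk first, so it spreads early.
    Mix -> strong uploaders act LDF-like to push fresh chunks into the
           swarm; weak peers act EDF-like to avoid playback stalls."""
    if not missing:
        return None
    return max(missing) if upload_capacity >= ldf_threshold else min(missing)

print(next_chunk({12, 13, 17}, upload_capacity=2.0))   # 17 (LDF-like)
print(next_chunk({12, 13, 17}, upload_capacity=0.5))   # 12 (EDF-like)
```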

    A cost-efficient QoS-aware analytical model of future software content delivery networks

    Get PDF
    Freelance, part-time, work-at-home, and other flexible jobs are changing the concept of the workplace and bringing information- and content-exchange problems to companies. Geographically spread corporations may use remote distribution of software and data to meet employees' demands, by exploiting emerging delivery technologies. In this context, cost-efficient software distribution is crucial to allow business evolution and make IT infrastructures more agile. On the other hand, container-based virtualization technology is shaping the new trends of software deployment and infrastructure design. We envision current and future enterprise IT management trends evolving towards container-based software delivery over hybrid CDNs. This paper presents a novel cost-efficient, QoS-aware analytical model and a hybrid CDN-P2P architecture for enterprise software distribution. The model would allow delivery cost minimization for a wide range of companies, from big multinationals to SMEs, using CDN-P2P distribution under various hypothetical industrial scenarios. Model constraints guarantee acceptable deployment times and keep the amount of exchanged content below the network bandwidth and storage limits in our scenarios. Indeed, key model parameters account for network bandwidth, storage limits and rental prices, which are determined empirically from the values offered by the commercial delivery networks KeyCDN, MaxCDN, CDN77 and BunnyCDN. This preliminary study indicates that MaxCDN offers the best cost-QoS trade-off. The model is implemented in the network simulation tool PeerSim and then applied to diverse testing scenarios by varying company types, the number and profile (either technical or administrative) of employees, and the number and size of content requests. Hybrid simulation results show overall economic savings between 5% and 20%, compared to just hiring resources from a commercial CDN, while guaranteeing satisfactory QoS levels in terms of deployment times and number of served requests. This work was partially supported by Generalitat de Catalunya under the SGR Program (2017-SGR-962) and the RIS3CAT DRAC Project (001-P-001723). We have also received funding from the Ministry of Science and Innovation (Spain) under project EQC2019-005653-P.
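
    As an illustrative formulation of the cost-versus-QoS trade-off (not the paper's exact model), the delivery cost can be written as a constrained minimisation over the CDN bandwidth $B^{\mathrm{cdn}}_t$ and storage $S$ rented in each period $t$, where $p_{\mathrm{bw}}$ and $p_{\mathrm{st}}$ are the rented bandwidth and storage prices, $D_t$ the content demand, $B^{\mathrm{p2p}}_t$ the peer-contributed bandwidth bounded by the employees' upload capacities $u_{i,t}$, and $T_{\max}$ the acceptable deployment time:

```latex
\min_{B^{\mathrm{cdn}}_t,\; S} \;\; \sum_{t} p_{\mathrm{bw}}\, B^{\mathrm{cdn}}_t \;+\; p_{\mathrm{st}}\, S
\quad \text{s.t.} \quad
B^{\mathrm{cdn}}_t + B^{\mathrm{p2p}}_t \ge D_t, \qquad
B^{\mathrm{p2p}}_t \le \sum_i u_{i,t}, \qquad
T_{\mathrm{deploy}} \le T_{\max}.
```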