    Scalable playback rate control in P2P live streaming systems

    Current commercial live video streaming systems are based either on a typical client–server (cloud) or on a peer-to-peer (P2P) architecture. The former is preferred for its stability and QoS, provided the system is not stretched beyond its bandwidth capacity, while the latter scales with low bandwidth and management cost. In this paper, we propose a P2P live streaming architecture that dynamically adapts the playback rate so that peers receive the stream even when the total upload bandwidth changes very abruptly. To achieve this, we develop a scalable mechanism that monitors the total available bandwidth by probing only a small subset of peers, and a playback rate control mechanism that dynamically adapts the playback rate to those resources. We model the relationship between playback rate and available bandwidth analytically using difference equations, which allows us to apply a control-theoretic approach. We also quantify monitoring inaccuracies and dynamic bandwidth changes and, as a function of these, dynamically calculate the maximum playback rate for which the proposed system is able to guarantee uninterrupted and complete distribution of the stream. Finally, we evaluate the control strategy and the theoretical model in a packet-level simulator of a complete P2P live streaming system that we designed in OPNET Modeler. Our evaluation results show uninterrupted and complete stream delivery (every peer receives more than 99% of video blocks in every scenario) even under very adverse bandwidth changes.
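    A minimal sketch of the idea behind this abstract, assuming a simple proportional controller fed by a uniform random probe of peers; the function names, gain and safety margin below are illustrative assumptions, not the paper's actual monitoring or control mechanism:

```python
import random

def estimate_total_upload(peers, sample_size=20):
    """Estimate aggregate upload bandwidth by probing a small random subset
    of peers and scaling the sample mean to the population size.
    (Illustrative only; the paper's probing mechanism is more involved.)"""
    sample = random.sample(peers, min(sample_size, len(peers)))
    mean_upload = sum(p["upload_kbps"] for p in sample) / len(sample)
    return mean_upload * len(peers)

def update_playback_rate(current_rate, peers, safety=0.85, gain=0.3):
    """One step of a proportional controller driving the playback rate
    toward a fraction of the estimated per-peer upload capacity."""
    capacity_per_peer = estimate_total_upload(peers) / len(peers)
    target = safety * capacity_per_peer
    return current_rate + gain * (target - current_rate)

# Example: 1000 peers with heterogeneous upload bandwidth
peers = [{"upload_kbps": random.choice([256, 512, 1024, 2048])} for _ in range(1000)]
rate = 400.0
for _ in range(10):
    rate = update_playback_rate(rate, peers)
print(f"adapted playback rate: {rate:.0f} kbps")
```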

    The state of peer-to-peer network simulators

    Networking research often relies on simulation to test and evaluate new ideas. An important requirement of this process is that results must be reproducible, so that other researchers can replicate, validate and extend existing work. We survey the landscape of simulators for research in peer-to-peer (P2P) networks by examining a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research using bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. We aim to drive the community towards performing their experiments on simulators that allow others to validate their results.

    Live Streaming in P2P and Hybrid P2P-Cloud Environments for the Open Internet

    Peer-to-Peer (P2P) live media streaming is an emerging technology that reduces the barrier to streaming live events over the Internet. However, providing a high-quality media stream using P2P overlay networks is challenging and gives rise to a number of issues: (i) how to guarantee quality of service (QoS) in the presence of dynamism, (ii) how to incentivize nodes to participate in media distribution, (iii) how to avoid bottlenecks in the overlay, and (iv) how to deal with nodes that reside behind Network Address Translator (NAT) gateways. In this thesis, we answer the above research questions in the form of new algorithms and systems. First, we address problems (i) and (ii) by presenting our P2P live media streaming solutions: Sepidar, which is a multiple-tree overlay, and GLive, which is a mesh overlay. In both models, nodes with higher upload bandwidth are positioned closer to the media source. This structure reduces playback latency and increases playback continuity at nodes, and also incentivizes nodes to provide more upload bandwidth. We use a reputation model to encourage nodes to participate in media distribution in Sepidar and GLive. In both systems, nodes audit the behaviour of their directly connected nodes by getting feedback from other nodes. Nodes that upload more of the stream get a relatively higher reputation and proportionally higher-quality streams. To construct our streaming overlay, we present a distributed market model inspired by the Bertsekas auction algorithm, although our model does not rely on a central server with global knowledge. In our model, each node has only partial information about the system. Nodes acquire knowledge of the system by sampling nodes using the Gradient overlay, which facilitates the discovery of nodes with similar upload bandwidth. We address the bottleneck problem, problem (iii), by presenting CLive, which satisfies real-time constraints on the delay between the generation of the stream and its actual delivery to users. We resolve this problem by borrowing resources (helpers) from the cloud when needed. In our approach, helpers are added on demand to the overlay to increase the total available bandwidth, thus increasing the probability of receiving the video on time. As the use of cloud resources costs money, we model the problem as the minimization of the economic cost, provided that a set of QoS constraints is satisfied. Finally, we solve the NAT problem, problem (iv), by presenting two NAT-aware peer sampling services (PSS): Gozar and Croupier. Traditional gossip-based PSSs break down when a high percentage of nodes are behind NATs. We overcome this problem in Gozar using one-hop relaying to communicate with nodes behind NATs. Croupier similarly implements a gossip-based PSS, but without the use of relaying.
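    As a rough illustration of the reputation idea described above (nodes that upload more earn higher reputation and proportionally better streams), the following sketch uses an exponentially weighted update driven by neighbour feedback; the update rule, parameters and quality mapping are assumptions for illustration, not the Sepidar/GLive auditing scheme:

```python
def update_reputation(reputations, feedback, alpha=0.2):
    """Exponentially weighted reputation update from neighbour feedback.
    `feedback` maps node id -> fraction of requested blocks actually uploaded.
    (Illustrative assumption; the thesis uses a richer auditing scheme.)"""
    for node, delivered_fraction in feedback.items():
        old = reputations.get(node, 0.5)          # unknown nodes start neutral
        reputations[node] = (1 - alpha) * old + alpha * delivered_fraction
    return reputations

def allocate_quality(reputations, layers=4):
    """Map reputation to one of `layers` stream qualities, so nodes that
    upload more receive proportionally better streams."""
    return {n: min(layers, 1 + int(r * layers)) for n, r in reputations.items()}

reps = update_reputation({}, {"peer_a": 0.9, "peer_b": 0.3})
print(allocate_quality(reps))   # peer_a gets a higher quality layer than peer_b
```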

    A novel P2P and cloud computing hybrid architecture for multimedia streaming QoS cost functions

    Since its appearance, peer-to-peer technology has given rise to various multimedia streaming applications. Today, cloud computing offers different service models as a base for successful end-user applications. In this paper we propose joining peer-to-peer and cloud computing into a new architectural realization of a distributed cloud computing network for multimedia streaming, operating in both a centralized and a peer-to-peer distributed manner. This architecture merges private and public clouds; it is intended for commercial use but is at the same time scalable enough to offer the possibility of non-profit use. To take advantage of the cloud paradigm and make multimedia streaming more efficient, we introduce APIs in the cloud containing built-in functions for automatic QoS calculation, which permit negotiating QoS parameters such as bandwidth, jitter and latency between a cloud service provider and its potential clients.
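    The abstract mentions cloud APIs with built-in functions for automatic QoS calculation over bandwidth, jitter and latency. A hedged sketch of what such a cost function might look like is given below; the data types, weights and linear cost form are invented for illustration and are not the paper's API:

```python
from dataclasses import dataclass

@dataclass
class QoSRequest:
    bandwidth_kbps: float   # minimum bandwidth the client asks for
    max_jitter_ms: float    # jitter ceiling
    max_latency_ms: float   # latency ceiling

@dataclass
class ProviderOffer:
    bandwidth_kbps: float
    jitter_ms: float
    latency_ms: float
    price_per_gb: float

def qos_cost(request: QoSRequest, offer: ProviderOffer,
             weights=(1.0, 0.5, 0.5)) -> float:
    """Penalty-weighted cost of an offer; lower is better, inf = infeasible.
    The weights and the linear form are illustrative assumptions only."""
    if (offer.bandwidth_kbps < request.bandwidth_kbps
            or offer.jitter_ms > request.max_jitter_ms
            or offer.latency_ms > request.max_latency_ms):
        return float("inf")
    w_price, w_jit, w_lat = weights
    return (w_price * offer.price_per_gb
            + w_jit * offer.jitter_ms / request.max_jitter_ms
            + w_lat * offer.latency_ms / request.max_latency_ms)

# Pick the cheapest offer that satisfies the requested QoS
req = QoSRequest(bandwidth_kbps=2000, max_jitter_ms=30, max_latency_ms=150)
offers = [ProviderOffer(2500, 20, 100, 0.08), ProviderOffer(3000, 25, 140, 0.12)]
print(min(offers, key=lambda o: qos_cost(req, o)))
```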

    Adaptive Streaming in P2P Live Video Systems: A Distributed Rate Control Approach

    Dynamic Adaptive Streaming over HTTP (DASH) is a recently proposed standard that offers different versions of the same media content, adapting the delivery process over the Internet to dynamic bandwidth fluctuations and different user device capabilities. The peer-to-peer (P2P) paradigm for video streaming leverages cooperation among peers, serving every video request with increased scalability and reduced cost. We propose to combine these two approaches in a P2P-DASH architecture, exploiting the potential of both. The new platform is made of several swarms, and a different DASH representation is streamed within each of them; unlike client-server DASH architectures, where each client autonomously selects which version to download according to current network conditions and its device resources, we put forth a new rate control strategy implemented at the peer side to maintain good viewing quality for the local user while simultaneously guaranteeing the successful operation of the P2P swarms. The effectiveness of the solution is demonstrated through simulation, which indicates that the P2P-DASH platform is able to provide its users with very good performance, much more satisfying than in a conventional P2P environment where DASH is not employed. Through a comparison with a reference DASH system modeled via an Integer Linear Programming (ILP) approach, the new system is shown to outperform that reference architecture. To further validate the proposal, both in terms of robustness and scalability, system behavior is investigated under the critical condition of a flash crowd, showing that a strong upsurge of new users can be successfully detected and gradually accommodated. (12 pages, 17 figures; submitted to the IEEE Journal on Selected Areas in Communications.)
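    A minimal sketch of a peer-side representation selector in the spirit of the rate control strategy described above; the bitrate ladder, buffer threshold and swarm-health test are illustrative assumptions rather than the paper's actual controller:

```python
REPRESENTATIONS_KBPS = [400, 800, 1600, 3200]   # assumed DASH bitrate ladder

def select_representation(download_kbps, buffer_s, swarm_upload_ratio,
                          min_buffer_s=10.0):
    """Pick the highest representation that (a) the local download rate can
    sustain with headroom, (b) keeps the playout buffer above a safety level,
    and (c) the swarm can collectively serve (upload/demand ratio >= 1).
    Thresholds are illustrative assumptions."""
    if buffer_s < min_buffer_s or swarm_upload_ratio < 1.0:
        return REPRESENTATIONS_KBPS[0]          # fall back to the base quality
    sustainable = [r for r in REPRESENTATIONS_KBPS if r <= 0.8 * download_kbps]
    return sustainable[-1] if sustainable else REPRESENTATIONS_KBPS[0]

print(select_representation(download_kbps=2500, buffer_s=18, swarm_upload_ratio=1.2))
# -> 1600
```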

    How much can large-scale video-on-demand benefit from users' cooperation?

    We propose an analytical framework to tightly characterize the scaling laws for the additional bandwidth that servers must supply to guarantee perfect service in peer-assisted Video-on-Demand systems, taking into account essential aspects such as peer churn, bandwidth heterogeneity, and Zipf-like video popularity. Our results reveal that the catalog size and the content popularity distribution have a huge effect on system performance. We show that users' cooperation can effectively reduce the servers' burden for a wide range of system parameters, confirming it to be an attractive solution for limiting the costs incurred by content providers as the system scales to large populations of users.
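    For concreteness, a hedged sketch of the quantities the abstract refers to, in assumed notation (not the paper's): a Zipf-like popularity law and the additional server bandwidth as the part of the demand that peers' upload capacity cannot cover.

```latex
% Video k of a catalog of K titles is requested with Zipf-like probability
\[
  p_k \;=\; \frac{k^{-\alpha}}{\sum_{j=1}^{K} j^{-\alpha}}, \qquad k = 1,\dots,K,
\]
% and the additional server bandwidth needed for perfect service is the
% uncovered part of the demand:
\[
  S \;=\; \sum_{k=1}^{K} \bigl( \lambda_k\, r - U_k \bigr)^{+},
  \qquad \lambda_k = \lambda\, p_k ,
\]
% where \lambda is the total request rate, r the video streaming rate,
% U_k the upload bandwidth contributed by peers watching video k, and
% (x)^{+} = \max(x, 0). Notation assumed for illustration.
```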

    Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of cloud computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be triggered by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. Besides that, we evaluate such a migration for video services. Finally, we present potential research challenges and trends.
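    As a very rough sketch of how migration could be triggered by request patterns, the snippet below moves a video service toward a fog node when the recent request rate from its coverage area crosses a threshold; the window, threshold and class interface are illustrative assumptions, not the article's mechanism:

```python
from collections import deque

class MigrationTrigger:
    """Decide when to migrate a video service from the cloud to a fog node,
    based on the recent request rate from that node's coverage area.
    Window size and threshold are illustrative assumptions."""
    def __init__(self, threshold_req_per_s=50.0, window_s=60):
        self.threshold = threshold_req_per_s
        self.window_s = window_s
        self.requests = deque()              # timestamps of recent requests

    def record_request(self, t):
        self.requests.append(t)
        while self.requests and t - self.requests[0] > self.window_s:
            self.requests.popleft()          # drop requests outside the window

    def should_migrate(self):
        rate = len(self.requests) / self.window_s
        return rate >= self.threshold

trigger = MigrationTrigger(threshold_req_per_s=1.0, window_s=10)
for t in range(30):
    trigger.record_request(t)                # one request per second
print(trigger.should_migrate())              # True: ~1.1 req/s >= 1 req/s
```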
