
    The state of peer-to-peer network simulators

    Networking research often relies on simulation to test and evaluate new ideas. An important requirement of this process is that results be reproducible, so that other researchers can replicate, validate, and extend existing work. We survey the landscape of simulators for research in peer-to-peer (P2P) networks, covering a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research that uses bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. Our aim is to drive the community towards performing experiments on simulators that allow others to validate their results.

    A Stable Fountain Code Mechanism for Peer-to-Peer Content Distribution

    Most peer-to-peer content distribution systems require peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass up some download opportunities for common pieces in order to favor rare ones. This avoids the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism that allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server, to generate innovative new pieces, with a prioritization rule under which the server delivers pieces only to new peers. While neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate through both analytical and numerical evaluation that their combination does. (Accepted to IEEE INFOCOM 2014; 9 pages.)
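
    The stabilizing mechanism is easy to mimic in a toy discrete-time simulation. The Python sketch below shows the two server rules from the abstract: every upload is a fresh fountain-coded piece, and the server uploads only to brand-new peers, while peers leave greedily the moment they can decode. The piece count K, arrival rate, and round structure are illustrative assumptions, and the linear independence of coded pieces is idealized as a simple counter.

        import random

        K = 20           # coded pieces needed to decode the file (assumed)
        ROUNDS = 5000    # simulated time steps (assumed)
        ARRIVAL_P = 0.4  # chance a new peer arrives in a round (assumed)

        peers = []  # each peer is just its count of innovative coded pieces

        for _ in range(ROUNDS):
            if random.random() < ARRIVAL_P:
                peers.append(0)

            # Server rule: one fountain-coded upload per round, and only to a
            # brand-new peer (zero pieces). Fountain coding makes every server
            # piece innovative, so a counter suffices in this idealization.
            newcomers = [i for i, c in enumerate(peers) if c == 0]
            if newcomers:
                peers[random.choice(newcomers)] += 1

            # Peer-to-peer exchange: each peer holding pieces uploads one coded
            # recombination to a random other peer; with random linear codes the
            # piece is innovative with high probability, idealized here as always.
            for i in [i for i, c in enumerate(peers) if c > 0]:
                if len(peers) > 1:
                    j = random.choice([k for k in range(len(peers)) if k != i])
                    peers[j] += 1

            # Greedy departures: peers leave the instant they can decode.
            peers = [c for c in peers if c < K]

        print("peers still downloading at the end:", len(peers))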

    Live Streaming in P2P and Hybrid P2P-Cloud Environments for the Open Internet

    Peer-to-Peer (P2P) live media streaming is an emerging technology that lowers the barrier to streaming live events over the Internet. However, providing a high-quality media stream using P2P overlay networks is challenging and gives rise to a number of issues: (i) how to guarantee quality of service (QoS) in the presence of dynamism, (ii) how to incentivize nodes to participate in media distribution, (iii) how to avoid bottlenecks in the overlay, and (iv) how to deal with nodes that reside behind Network Address Translator (NAT) gateways. In this thesis, we answer these research questions in the form of new algorithms and systems.

    We address problems (i) and (ii) by presenting our P2P live media streaming solutions: Sepidar, a multiple-tree overlay, and GLive, a mesh overlay. In both models, nodes with higher upload bandwidth are positioned closer to the media source. This structure reduces playback latency, increases playback continuity at nodes, and incentivizes nodes to provide more upload bandwidth. We use a reputation model in Sepidar and GLive to improve nodes' participation in media distribution: nodes audit the behaviour of their directly connected nodes by getting feedback from other nodes, and nodes that upload more of the stream earn a relatively higher reputation and receive proportionally higher-quality streams. To construct our streaming overlay, we present a distributed market model inspired by Bertsekas' auction algorithm, although our model does not rely on a central server with global knowledge; each node has only partial information about the system. Nodes acquire knowledge of the system by sampling other nodes through the Gradient overlay, which facilitates the discovery of nodes with similar upload bandwidth.

    We address the bottleneck problem, problem (iii), by presenting CLive, which satisfies real-time constraints on the delay between the generation of the stream and its actual delivery to users. We resolve this problem by borrowing resources (helpers) from the cloud when needed: helpers are added to the overlay on demand to increase the total available bandwidth, and thus the probability of receiving the video on time. As the use of cloud resources costs money, we model the problem as the minimization of the economic cost, provided that a set of QoS constraints is satisfied.

    Finally, we solve the NAT problem, problem (iv), by presenting two NAT-aware peer sampling services (PSSs): Gozar and Croupier. Traditional gossip-based PSSs break down when a high percentage of nodes are behind NATs. Gozar overcomes this problem using one-hop relaying to communicate with nodes behind NATs; Croupier similarly implements a gossip-based PSS, but without the use of relaying.
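
    As an illustration of the market-model idea, the toy Python sketch below implements a single auction step: a parent with a free upload slot accepts any bidder, and a full parent evicts its lowest-bandwidth child when a higher-bandwidth node bids. Slot counts, bandwidth values, and the uniform candidate sampling (a crude stand-in for the Gradient overlay's partial views) are all assumptions, not the actual Sepidar/GLive protocol.

        import random

        class Node:
            def __init__(self, name, upload_bw, slots=2):
                self.name, self.upload_bw, self.slots = name, upload_bw, slots
                self.children, self.parent = [], None

        def accept(bidder, parent):
            if bidder.parent:
                bidder.parent.children.remove(bidder)
            bidder.parent = parent
            parent.children.append(bidder)

        def bid(bidder, parent):
            """One auction step: free slots are granted outright; a full
            parent evicts its weakest child if the bidder offers more
            upload bandwidth. Repeated locally everywhere, this pushes
            high-bandwidth nodes toward the media source."""
            if len(parent.children) < parent.slots:
                accept(bidder, parent)
                return True
            weakest = min(parent.children, key=lambda c: c.upload_bw)
            if bidder.upload_bw > weakest.upload_bw:
                parent.children.remove(weakest)
                weakest.parent = None
                accept(bidder, parent)
                return True
            return False

        random.seed(7)
        source = Node("source", upload_bw=float("inf"), slots=4)
        peers = [Node(f"p{i}", upload_bw=random.randint(1, 100)) for i in range(20)]

        for _ in range(500):  # gossip-style rounds
            bidder = random.choice(peers)
            target = random.choice([source] + [p for p in peers if p is not bidder])
            bid(bidder, target)

        print("upload bandwidth of the source's children:",
              sorted((c.upload_bw for c in source.children), reverse=True))

    After enough rounds, the highest-bandwidth peers end up as children of the source, which is the positioning property the abstract describes.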

    Cost-effective low-delay cloud video conferencing

    The cloud computing paradigm has been advocated in recent video conferencing system designs, which exploit the rich on-demand resources spanning multiple geographic regions of a distributed cloud for a better conferencing experience. A typical architecture creates video conferencing agents, i.e., virtual machines, in each cloud site, assigns users to the agents, and enables inter-user communication through the agents. Given the diversity of the users' devices and network connectivity, the agents may also transcode the conferencing streams to the best formats and bitrates. In this architecture, two key issues arise: how to effectively assign users to agents, and how to identify the best agent to perform a transcoding task. These are nontrivial because (1) the existing proximity-based assignment fails to consider the whereabouts of the other users in a conferencing session, and so may not be optimal in terms of inter-user delay; and (2) the agents may have heterogeneous bandwidth and processing availability, so the best transcoding agents must be carefully identified to minimize cost while best serving all the users that require the transcoded streams. To address these challenges, we formulate the user-to-agent assignment and transcoding-agent selection problems, targeting the minimization of the conferencing provider's operational cost while keeping the conferencing delay low. The optimization problem is combinatorial in nature and difficult to solve. Using the Markov approximation framework, we design a decentralized algorithm that provably converges to a bounded neighborhood of the optimal solution. An agent ranking scheme is also proposed to properly initialize our algorithm and thereby improve its convergence. Results from a prototype system implementation show that, in a set of Internet-scale scenarios, our design reduces the operational cost by 77% compared to a commonly adopted alternative, while simultaneously yielding lower conferencing delays.
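
    The Markov approximation recipe can be illustrated in a few lines of Python. In the toy below, each user independently proposes a random reassignment and accepts it with a Glauber-style probability, so the chain's stationary distribution concentrates near low-cost configurations as beta grows. All numbers are assumptions, and the per-user costs are made separable for brevity, whereas the paper's cost couples users through inter-user delays and agent capacities.

        import math
        import random

        USERS, AGENTS = 6, 3
        COST = {(u, a): random.uniform(1, 10)   # per-user cost of each agent (assumed)
                for u in range(USERS) for a in range(AGENTS)}
        BETA = 2.0   # larger beta => tighter neighborhood of the optimum

        assign = {u: random.randrange(AGENTS) for u in range(USERS)}

        def total_cost(assign):
            return sum(COST[u, a] for u, a in assign.items())

        for _ in range(10_000):
            u = random.randrange(USERS)          # users act independently, which
            proposal = random.randrange(AGENTS)  # is what makes it decentralized
            delta = COST[u, proposal] - COST[u, assign[u]]
            # Glauber transition: hop with probability 1 / (1 + exp(beta * delta)),
            # giving stationary probabilities proportional to exp(-beta * cost)
            if random.random() < 1 / (1 + math.exp(BETA * delta)):
                assign[u] = proposal

        print("final assignment:", assign, "cost:", round(total_cost(assign), 2))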

    iPASS: Incentivized Peer-Assisted System for Asynchronous Streaming


    Peer Selection in Peer-to-Peer Streaming Systems

    One important task of any peer-to-peer streaming system (p2p-ss) is choosing which peers should connect to which; how well a p2p-ss performs this task greatly influences its performance. This thesis explores how different peer selection algorithms affect the performance of such systems. A framework for comparing peer selection algorithms is built on top of the network simulator ns2, making it possible to later extend the simulations with new peer selection algorithms, congestion control algorithms, wireless networks, cross traffic, and more. However, ns2 is a low-level simulator, so limited CPU resources cap the number of peers in the simulations. The simulations are limited to single-layered streams. We find that a centralized selection method, which utilizes knowledge of bandwidth capacities and routing in the network, greatly outperforms both simple random selection of peers and selection of close peers. Even though centralized selection does not scale well, and is therefore only applicable to a limited number of peers, this shows there is much room for improvement over basic strategies.
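
    The three strategies compared in the thesis are easy to caricature in Python. In the sketch below, a one-dimensional coordinate stands in for network locality and a per-peer upload figure stands in for the capacity knowledge available to the centralized method; all values, and the scoring itself, are illustrative assumptions rather than the thesis's ns2 setup.

        import random

        random.seed(1)
        # Toy swarm: upload capacity in Mbit/s, 1-D position as a stand-in
        # for network locality (both assumed).
        PEERS = [{"id": i,
                  "upload": random.choice([0.5, 1.0, 4.0, 10.0]),
                  "pos": random.random()} for i in range(50)]

        def select_random(me, k=4):
            """Baseline: k neighbours chosen uniformly at random."""
            return random.sample([p for p in PEERS if p is not me], k)

        def select_close(me, k=4):
            """Locality-based: the k peers nearest to us."""
            return sorted((p for p in PEERS if p is not me),
                          key=lambda p: abs(p["pos"] - me["pos"]))[:k]

        def select_centralized(me, k=4):
            """Oracle with global knowledge: the k peers with the most
            upload capacity (a crude stand-in for the thesis's
            bandwidth- and routing-aware centralized method)."""
            return sorted((p for p in PEERS if p is not me),
                          key=lambda p: -p["upload"])[:k]

        me = PEERS[0]
        for strategy in (select_random, select_close, select_centralized):
            chosen = strategy(me)
            print(f"{strategy.__name__:20s} total upload: "
                  f"{sum(p['upload'] for p in chosen):5.1f} Mbit/s")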

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, and it continues to grow both in the number of users and in the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and, consequently, an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or through traffic classification using static policies, as in the current approach to QoS provisioning, Differentiated Services (DiffServ). This use of static resource allocation and traffic shaping reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services.

    The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic poses to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness.

    This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to deliver a QoS-optimised experience to every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services.

    The CAPS architecture has been validated through extensive simulations, with topologies replicating the complexity and scale of real ISP infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional DiffServ, and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, by avoiding static resource allocation, can adapt with the Internet user as their use of services changes.
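
    A minimal sketch of the user-centric idea, in Python: rather than fixed global traffic classes, each user's access capacity is split across that user's concurrent services in a ratio derived from their own current traffic profile, with unresponsive bulk traffic capped so it cannot starve the rest. The service names, profile fractions, and cap are illustrative assumptions; CAPS's actual scheduling and congestion-awareness logic is considerably richer.

        # Illustrative per-user profile: fraction of the user's currently
        # active traffic contributed by each service (assumed values).
        USER_PROFILE = {
            "voip":      0.10,
            "streaming": 0.50,
            "web":       0.15,
            "p2p":       0.25,
        }

        def user_centric_allocation(profile, capacity_mbit, bulk_cap=0.30):
            """Split one user's access capacity across their services.
            Unresponsive bulk traffic ('p2p' here) is capped at bulk_cap
            of the weight mass; the remaining bandwidth follows the user's
            own profile instead of a static global policy."""
            weights = dict(profile)
            weights["p2p"] = min(weights.get("p2p", 0.0), bulk_cap)
            total = sum(weights.values())
            return {svc: round(capacity_mbit * w / total, 2)
                    for svc, w in weights.items()}

        print(user_centric_allocation(USER_PROFILE, capacity_mbit=20.0))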