14 research outputs found

    Small degree BitTorrent

    Get PDF
    It is well-known that the BitTorrent file sharing protocol is responsible for a significant portion of Internet traffic. A large amount of work has been devoted to reducing the footprint of the protocol in terms of traffic volume; however, its flow-level footprint has not been studied in depth. We argue in this paper that the large number of flows that a BitTorrent client maintains will not scale beyond a certain point. To address this problem, we first examine the flow structure through realistic simulations. We find that only a few TCP connections are used frequently for data transfer, while most of the connections are used mostly for signaling. This makes it possible to separate the data and signaling paths. We propose that, as the signaling traffic incurs little overhead, it should be carried on a separate, dedicated small-degree overlay, while the data traffic should use temporary TCP sockets that are active only during the data transfer. Through simulation we show that this separation has no significant effect on the performance of the BitTorrent protocol, while drastically reducing the number of actual flows.
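    The flow-level saving behind this proposal can be illustrated with simple arithmetic; the degree and transfer counts below are our own illustrative assumptions, not figures from the paper:

```python
# Sketch: concurrent flows per peer, stock BitTorrent vs. the proposed
# split design. All parameter values are illustrative assumptions.

def stock_flows(neighbors: int) -> int:
    """A stock client keeps one long-lived TCP flow per neighbor."""
    return neighbors

def split_flows(overlay_degree: int, active_transfers: int) -> int:
    """Split design: a small dedicated signaling overlay plus
    temporary TCP sockets that exist only while data is moving."""
    return overlay_degree + active_transfers

# Typical-looking numbers: ~50 neighbors, but only a handful of peers
# unchoked for actual data transfer at any moment.
baseline = stock_flows(50)
proposed = split_flows(overlay_degree=8, active_transfers=4)
print(baseline, proposed)  # 50 vs. 12 concurrent flows
```

    The point of the sketch is that the flow count becomes bounded by the overlay degree plus the (small) number of simultaneously active transfers, instead of the neighborhood size.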

    A stochastic epidemiological model and a deterministic limit for BitTorrent-like peer-to-peer file-sharing networks

    Get PDF
    In this paper, we propose a stochastic model for a file-sharing peer-to-peer network that resembles the popular BitTorrent system: large files are split into chunks, and a peer can download or swap only one chunk at a time from another peer. We prove that the fluid limits of a scaled Markov model of this system are of the coagulation form, special cases of which are well-known epidemiological (SIR) models. In addition, Lyapunov stability and settling-time results are explored. We derive conditions under which the BitTorrent incentives under consideration result in shorter mean file-acquisition times for peers compared to client-server (single-chunk) systems. Finally, a diffusion approximation is given and some open questions are discussed. Comment: 25 pages, 6 figures
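    The SIR special case mentioned above can be integrated numerically with a few lines of code; the rates b and g below are arbitrary placeholders, not values from the paper:

```python
# Forward-Euler integration of the classic SIR system,
#   dS/dt = -b*S*I,  dI/dt = b*S*I - g*I,  dR/dt = g*I,
# which the paper identifies as a special case of its fluid limit.

def sir(s0, i0, r0, b, g, dt=0.01, steps=5000):
    s, i, r = s0, i0, r0
    for _ in range(steps):
        ds = -b * s * i
        di = b * s * i - g * i
        dr = g * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

s, i, r = sir(s0=0.99, i0=0.01, r0=0.0, b=0.5, g=0.1)
# The three rates sum to zero, so the total population is conserved.
print(round(s + i + r, 6))
```

    In the file-sharing analogy, "susceptible" peers lack a chunk, "infected" peers hold and spread it, and "removed" peers have left the swarm.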

    Fast download but eternal seeding: The reward and punishment of Sharing Ratio Enforcement

    Get PDF
    Many private BitTorrent communities employ Sharing Ratio Enforcement (SRE) schemes to incentivize users to contribute their upload resources. It has been demonstrated that communities that use SRE are greatly oversupplied, i.e., they have much higher seeder-to-leecher ratios than communities in which SRE is not employed. The first-order effect of oversupply under SRE is an increase in the average downloading speed. However, users are forced to seed for extremely long times to maintain adequate sharing ratios to be able to start new downloads. In this paper, we propose a fluid model to study the effects of oversupply under SRE, which predicts the average downloading speed, the average seeding time, and the average upload capacity utilization for users in communities that employ SRE. We notice that the phenomenon of oversupply has two undesired effects: a) peers are forced to seed for long times, even though their seeding efforts are often not very productive (in terms of low upload capacity utilization); and b) SRE discriminates against peers with low bandwidth capacities, forcing them to seed for longer durations than peers with high capacities. To alleviate these problems, we propose four different strategies for SRE, which have been inspired by ideas in social sciences and economics. We evaluate these strategies through simulations. Our results indicate that these new strategies release users from needlessly long seeding durations, while also being fair towards peers with low capacities and maintaining high system-wide downloading speeds. © 2011 IEEE
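    The arithmetic behind the "eternal seeding" effect, and its bias against low-capacity peers, is easy to sketch; the file size, rates, and threshold below are illustrative assumptions only:

```python
# Sketch: seeding time required under Sharing Ratio Enforcement (SRE).
# ratio = uploaded / downloaded must reach a community threshold
# before the user may start new downloads.

def required_seeding_hours(file_gb: float, upload_mbps: float,
                           ratio_threshold: float = 1.0) -> float:
    """Hours of seeding needed to upload ratio_threshold * file size,
    assuming the given sustained upload rate (an optimistic assumption
    in an oversupplied swarm, where upload utilization is low)."""
    megabits_needed = ratio_threshold * file_gb * 8 * 1024
    return megabits_needed / upload_mbps / 3600

# A 4 GB file, ratio threshold 1.0:
fast = required_seeding_hours(4, 10.0)  # high-capacity peer: ~0.9 h
slow = required_seeding_hours(4, 1.0)   # low-capacity peer: ~9.1 h
print(round(fast, 2), round(slow, 2))
```

    The seeding time scales inversely with upload capacity, which is exactly the discrimination effect (b) that the paper's proposed strategies try to alleviate; in an oversupplied swarm the effective upload rate is far below capacity, stretching these times further.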

    Efficient Content Distribution With Managed Swarms

    Full text link
    Content distribution has become increasingly important as people have become more reliant on Internet services to provide large multimedia content. Efficiently distributing content is a complex and difficult problem: large content libraries are often distributed across many physical hosts, and each host has its own bandwidth and storage constraints. Peer-to-peer and peer-assisted download systems further complicate content distribution. By contributing their own bandwidth, end users can improve overall performance and reduce load on servers, but end users have their own motivations and incentives that are not necessarily aligned with those of content distributors. Consequently, existing content distributors either opt to serve content exclusively from hosts under their direct control, and thus neglect the large pool of resources that end users can offer, or they allow end users to contribute bandwidth at the expense of sacrificing complete control over available resources. This thesis introduces a new approach to content distribution that achieves high performance for distributing bulk content, based on managed swarms. Managed swarms efficiently allocate bandwidth from origin servers, in-network caches, and end users to achieve system-wide performance objectives. Managed swarming systems are characterized by the presence of a logically centralized coordinator that maintains a global view of the system and directs hosts toward an efficient use of bandwidth. The coordinator allocates bandwidth from each host based on empirical measurements of swarm behavior combined with a new model of swarm dynamics. The new model enables the coordinator to predict how swarms will respond to changes in bandwidth based on past measurements of their performance. In this thesis, we focus on the global objective of maximizing download bandwidth across end users in the system. 
To that end, we introduce two algorithms that the coordinator can use to compute efficient allocations of bandwidth for each host that result in high download speeds for clients. We have implemented a scalable coordinator that uses these algorithms to maximize system-wide aggregate bandwidth. The coordinator actively measures swarm dynamics and uses the data to calculate, for each host, a bandwidth allocation among the swarms competing for the host's bandwidth. Extensive simulations and a live deployment show that managed swarms significantly outperform centralized distribution services as well as completely decentralized peer-to-peer systems.
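    A logically centralized allocation of this kind can be sketched as a simple greedy assignment; the marginal-demand model here is our own crude simplification, not the thesis's measurement-driven algorithm:

```python
# Sketch: a coordinator splits one host's upload bandwidth across swarms.
# Each swarm reports a demand (unmet download bandwidth, e.g. in Mbit/s);
# the coordinator fills demands greedily, largest first. This is a
# stand-in for the thesis's model-based allocation, not its algorithm.

def allocate(host_bw: float, demands: dict) -> dict:
    alloc = {swarm: 0.0 for swarm in demands}
    remaining = host_bw
    for swarm, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        give = min(need, remaining)
        alloc[swarm] = give
        remaining -= give
        if remaining <= 0:
            break
    return alloc

demands = {"swarm_a": 40.0, "swarm_b": 25.0, "swarm_c": 10.0}
print(allocate(60.0, demands))  # a gets 40, b gets 20, c gets 0
```

    The key property this toy shares with the real system is that allocation decisions are made from a global view of all competing swarms, rather than by each host independently.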

    BitTorrent under a microscope: towards static QoS provision in dynamic peer-to-peer networks

    Full text link
    For peer-to-peer (P2P) networks to continue to flourish, QoS provision is critical. However, P2P networks are notoriously dynamic and heterogeneous. As a result, QoS provision in P2P networks is a challenging task, with nodes having varying and intermittent throughput. This raises a fundamental problem: is stable and fine-grained QoS provision achievable in highly dynamic and heterogeneous P2P networks? In this work, we investigate BitTorrent (BT) with particular interest in its QoS performance in highly dynamic and heterogeneous networks. Our contributions are two-fold. First, we develop an analytical model to examine a randomly selected BT node under a microscope. Based on the model, we study the mean and variance of the nodal download rate in the dynamic network and the performance of BT in QoS provision under different levels of peer churn. Our analysis reveals that although BT strives to provide nodes with guaranteed throughput, due to network dynamics the download rates of peers oscillate widely and can hardly converge to the target QoS proposed in previous literature. Second, to improve QoS provision, we propose an enhanced protocol that integrates with BT. The proposed protocol enables nodes to quickly and precisely search for their uploaders and, as a result, achieve guaranteed and stable QoS in dynamic networks. Using both analysis and simulations, we validate the effectiveness of the proposed protocol in comparison with the original BT
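    The rate oscillation the model predicts shows up even in a toy Monte-Carlo experiment; the construction below is entirely our own, not the paper's analytical model, and all parameters are illustrative:

```python
# Toy churn experiment: each round, a node's uploaders may independently
# depart (churn); the per-round download rate then fluctuates around its
# mean instead of settling to a target, so we report mean and variance.
import random

def simulate(rounds=10000, uploaders=4, rate_per_uploader=100.0,
             churn_prob=0.3, seed=42):
    rng = random.Random(seed)
    rates = []
    for _ in range(rounds):
        alive = sum(1 for _ in range(uploaders) if rng.random() > churn_prob)
        rates.append(alive * rate_per_uploader)
    mean = sum(rates) / len(rates)
    var = sum((r - mean) ** 2 for r in rates) / len(rates)
    return mean, var

mean, var = simulate()
print(round(mean, 1), round(var, 1))
# Expected mean ~ 4 * 0.7 * 100 = 280, but the variance stays large:
# the rate keeps oscillating rather than converging to a target QoS.
```

    Even this binomial toy makes the paper's point visible: the mean rate can look healthy while the variance, driven purely by churn, remains far from zero.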

    On Flash Crowd Performance of Peer-Assisted File Distribution

    Get PDF
    Ph.D. (Doctor of Philosophy) thesis

    Stochastic Analysis of Self-Sustainability in Peer-Assisted VoD Systems

    Get PDF
    We consider a peer-assisted Video-on-Demand system, in which video distribution is supported both by peers caching the whole video and by peers concurrently downloading it. We propose a stochastic fluid framework that allows us to characterize the additional bandwidth requested from the servers to satisfy all users watching a given video. We obtain analytical upper bounds on the server bandwidth needed in the case in which users download the video content sequentially. We also present a methodology to obtain exact solutions for special cases of the peer upload bandwidth distribution. Our bounds permit us to tightly characterize the performance of peer-assisted VoD systems as the number of users increases, for both sequential and nonsequential delivery schemes. In particular, we rigorously prove that the simple sequential scheme is asymptotically optimal in both the bandwidth-surplus and the bandwidth-deficit modes, and that peer-assisted systems become totally self-sustaining in the surplus mode as the number of users grows large.
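    At its core, the surplus/deficit distinction is a supply-demand balance; a minimal sketch, with all rates being illustrative assumptions rather than the paper's fluid model:

```python
# Sketch: server bandwidth needed in a peer-assisted VoD swarm.
# Demand is the video rate times the number of concurrent viewers;
# supply is the aggregate peer upload bandwidth. The server covers
# any deficit; in surplus mode the system is self-sustaining.

def server_bandwidth(viewers: int, video_rate: float,
                     peer_uploads: list) -> float:
    demand = viewers * video_rate
    supply = sum(peer_uploads)
    return max(0.0, demand - supply)

# Deficit mode: peer uploads cannot cover the streaming demand.
print(server_bandwidth(10, 1.0, [0.5] * 10))  # 5.0
# Surplus mode: aggregate upload outpaces demand; server cost is zero.
print(server_bandwidth(10, 1.0, [1.5] * 10))  # 0.0
```

    The paper's contribution is bounding this quantity stochastically, per delivery scheme; the sketch only shows why, when average peer upload exceeds the video rate, the required server bandwidth vanishes as the population grows.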

    Large-scale Experiments on Cluster

    Get PDF
    Evaluation of large-scale network systems and applications is usually done in one of three ways: simulations, real deployment on the Internet, or an emulated network testbed such as a cluster. Simulations can study very large systems but often abstract away many practical details, whereas real-world tests are often quite small, on the order of a few hundred nodes at most, but have very realistic conditions. Clusters and other dedicated testbeds offer a middle ground between the two: large systems with real application code. They also typically allow configuring the testbed for repeatable experiments. In this paper we explore how to run large BitTorrent experiments in a cluster setup. We have chosen BitTorrent because its source code is available and it has been a popular target for research. In this thesis, we first give a detailed anatomy of the BitTorrent system: its basic components, logical architecture, key data structures, internal mechanisms, and implementations. We illustrate how the system works by splitting the whole distribution process into small scenarios. We then perform a series of experiments on our cluster with different combinations of parameters in order to gain a better understanding of the system's performance. We also make an initial attempt at formally discussing how to design a rational experiment, an issue that has not received the attention it deserves in previous research. Our contribution is two-fold. First, we show how to tweak and configure the BitTorrent client to allow a maximum number of clients to be run on a single machine, without running into any physical limits of the machine. Second, our results show that the behavior of BitTorrent can be very sensitive to its configuration, and we revisit some existing BitTorrent research and consider the implications of our findings on previously published results. As we show in this paper, BitTorrent can change its behavior in subtle ways which are sometimes ignored in published works.
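    One concrete machine limit that such tuning must respect is the per-process file-descriptor budget; the per-client socket count below is our own assumed figure for illustration, not one measured in the thesis:

```python
# Sketch: estimate how many BitTorrent clients fit on one machine
# before the file-descriptor limit becomes the bottleneck.
# per_client_fds is an assumed figure (peer sockets + tracker
# connection + open piece files + listening socket), not a measured one.

def max_clients(fd_limit: int, per_client_fds: int = 60,
                reserve: int = 256) -> int:
    """Clients that fit under fd_limit, keeping `reserve` descriptors
    for the OS, logging, and the experiment harness itself."""
    return max(0, (fd_limit - reserve) // per_client_fds)

print(max_clients(1024))   # a common default soft limit: 12 clients
print(max_clients(65536))  # after raising the limit: 1088 clients
```

    Similar back-of-the-envelope budgets apply to ephemeral ports, memory, and disk I/O; whichever resource is exhausted first caps the experiment size per machine.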