
    Study of Negative effects of Traffic Localization

    P2P networks have become important for both users and ISPs. Users want to share content and take advantage of these networks; ISPs, on the other hand, do not want users to make such intensive use of their Internet connections, because it cuts into their profits. Traffic localization has been announced as a solution to the drawbacks of P2P: it reduces the traffic exchanged between distant users, or users on different networks, by clustering peers, so that only a few users from each cluster exchange data with other networks. Several studies show the benefits of this measure, but there are few studies of its negative effects. In our work we simulate a BitTorrent network; once this network is ready, we organize it into clusters to emulate a traffic localization technique. Through several simulations we aim to show how traffic localization affects the users' experience.
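
    As a rough illustration of the kind of policy being simulated, the sketch below shows a tracker-side neighbour selection that keeps most connections inside a peer's own cluster (e.g. its ISP) and allows only a small quota of cross-cluster links. The Peer record, the cluster label and the quota values are hypothetical stand-ins, not the thesis' actual simulator code.

        import random
        from dataclasses import dataclass

        @dataclass
        class Peer:                  # hypothetical peer record for this sketch
            peer_id: int
            cluster: int             # e.g. the ISP/AS the peer belongs to

        def localized_peer_list(requester, peers, max_peers=50, external_quota=5):
            # Traffic-localization policy (illustrative): fill the neighbour list
            # with peers from the requester's own cluster and cap the number of
            # neighbours taken from other clusters, so most traffic stays local.
            local = [p for p in peers
                     if p.cluster == requester.cluster and p.peer_id != requester.peer_id]
            remote = [p for p in peers if p.cluster != requester.cluster]
            random.shuffle(local)
            random.shuffle(remote)
            selected = local[:max_peers - external_quota] + remote[:external_quota]
            return selected[:max_peers]

    Re-running the same swarm with a large external_quota (no localization) and a small one, and comparing completion times, is essentially the comparison the thesis describes.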

    Mesmerizer: An Effective Tool for a Complete Peer-to-Peer Software Development Life-cycle

    In this paper we present what are, in our experience, the best practices in Peer-to-Peer (P2P) application development and how we combined them in a middleware platform called Mesmerizer. We explain how simulation is an integral part of the development process and not just an assessment tool. We then present our component-based, event-driven framework for P2P application development, which can be used to execute multiple instances of the same application in a strictly controlled manner over an emulated network layer for simulation/testing, or a single application in a concurrent environment for deployment purposes. We highlight modeling aspects that are of critical importance for designing and testing P2P applications, e.g. the emulation of Network Address Translation and bandwidth dynamics. We show how our simulator scales when emulating low-level bandwidth characteristics of thousands of concurrent peers while preserving a good degree of accuracy compared to a packet-level simulator.
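
    The sketch below shows, in miniature, what a component-based event-driven core of this kind can look like: components exchange messages only through a scheduler, so the same handler code can be driven by a discrete-event clock for simulation or by a real-time executor for deployment. This is an illustrative design in that spirit, not the actual Mesmerizer API.

        import heapq
        import itertools

        class EventScheduler:
            # Minimal discrete-event scheduler: (time, seq, callback, args) entries
            # ordered by time, with a sequence number breaking ties.
            def __init__(self):
                self._queue = []
                self._seq = itertools.count()
                self.now = 0.0

            def schedule(self, delay, callback, *args):
                heapq.heappush(self._queue, (self.now + delay, next(self._seq), callback, args))

            def run(self, until=float("inf")):
                while self._queue and self._queue[0][0] <= until:
                    self.now, _, callback, args = heapq.heappop(self._queue)
                    callback(*args)

        class PingComponent:
            # Example component: its handlers only ever talk to the scheduler,
            # so they are unaware of whether time is simulated or real.
            def __init__(self, scheduler, name):
                self.scheduler, self.name, self.peer = scheduler, name, None

            def ping(self):
                print(f"{self.scheduler.now:4.2f}s {self.name}: ping")
                self.scheduler.schedule(0.05, self.peer.pong)  # emulated link latency

            def pong(self):
                print(f"{self.scheduler.now:4.2f}s {self.name}: pong")

        sched = EventScheduler()
        a, b = PingComponent(sched, "A"), PingComponent(sched, "B")
        a.peer, b.peer = b, a
        sched.schedule(1.0, a.ping)
        sched.run(until=10.0)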

    CLOSER: A Collaborative Locality-aware Overlay SERvice

    Current Peer-to-Peer (P2P) file sharing systems make use of a considerable percentage of Internet Service Providers' (ISPs') bandwidth. This paper presents the Collaborative Locality-aware Overlay SERvice (CLOSER), an architecture that aims at lessening the usage of expensive international links by exploiting traffic locality (i.e., a resource is downloaded from inside the ISP whenever possible). The paper proves the effectiveness of CLOSER by analysis and simulation, also comparing this architecture with existing solutions for traffic locality in P2P systems. While savings on international links can be attractive for ISPs, it is necessary to offer features of interest to users to favor wide adoption of the application. For this reason, CLOSER also introduces a privacy module that may arouse the users' interest and encourage them to switch to the new architecture.
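
    A toy version of the locality preference described above might look as follows; the peer/ISP records are invented for the example and do not reflect CLOSER's actual protocol messages.

        def choose_sources(resource, candidates, my_isp, want=4):
            # Locality-aware source selection (illustrative): take providers inside
            # the requester's ISP first and use external ones only to fill the gap.
            local = [c for c in candidates
                     if c["isp"] == my_isp and resource in c["resources"]]
            external = [c for c in candidates
                        if c["isp"] != my_isp and resource in c["resources"]]
            return (local + external)[:want]

        candidates = [
            {"peer": "p1", "isp": "ISP-A", "resources": {"fileX"}},
            {"peer": "p2", "isp": "ISP-B", "resources": {"fileX"}},
            {"peer": "p3", "isp": "ISP-A", "resources": {"fileX"}},
        ]
        # Peers p1 and p3 (same ISP) are preferred; p2 is only used as a filler.
        print(choose_sources("fileX", candidates, my_isp="ISP-A"))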

    CORNETO: A Software System for Simulating and Optimizing Optical Networks

    In this paper we present a software system that is being developed at the University of Leeds for simulating and optimizing energy-efficient optical core networks. The system is called CORNETO, an acronym for CORe NETwork Optimization. The software implements many of the energy-saving concepts, methods and computational heuristics that have been produced by the ongoing INTERNET (INTelligent Energy awaRe NETworks) project. The main objective of the software is to help network operators and planners green their networks while maintaining quality of service. In this paper we briefly describe the software and demonstrate its capabilities with two case studies.

    Understanding the Properties of the BitTorrent Overlay

    In this paper, we conduct extensive simulations to understand the properties of the overlay generated by BitTorrent. We start by analyzing how the overlay properties impact the efficiency of BitTorrent. We focus on the average peer set size (i.e., average number of neighbors), the time for a peer to reach its maximum peer set size, and the diameter of the overlay. In particular, we show that the later a peer arrives in a torrent, the longer it takes to reach its maximum peer set size. Then, we evaluate the impact of the maximum peer set size, the maximum number of outgoing connections per peer, and the number of NATed peers on the overlay properties. We show that BitTorrent generates a robust overlay, but that this overlay is not a random graph. In particular, the connectivity of a peer to its neighbors depends on its order of arrival in the torrent. We also show that a large number of NATed peers significantly compromises the robustness of the overlay to attacks. Finally, we evaluate the impact of peer exchange on the overlay properties, and we show that it generates a chain-like overlay with a large diameter, which will adversely impact the efficiency of large torrents.
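
    The overlay metrics used in this study can be computed directly on the neighbour graph. A small sketch, assuming the graph is available as an edge list and using networkx (the paper's own tooling is not specified):

        import networkx as nx

        def overlay_properties(edges):
            # edges: iterable of (peer, peer) neighbour pairs from the simulation.
            g = nx.Graph(edges)
            avg_peer_set = sum(d for _, d in g.degree()) / g.number_of_nodes()
            # The diameter is only defined on a connected graph, so measure it on
            # the largest connected component.
            giant = g.subgraph(max(nx.connected_components(g), key=len))
            return {
                "peers": g.number_of_nodes(),
                "average_peer_set_size": avg_peer_set,
                "diameter_of_largest_component": nx.diameter(giant),
            }

        # A chain-like overlay (as attributed above to peer exchange) shows the
        # large diameter immediately:
        print(overlay_properties([(i, i + 1) for i in range(100)]))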

    A Stable Fountain Code Mechanism for Peer-to-Peer Content Distribution

    Most peer-to-peer content distribution systems require the peers to privilege the welfare of the overall system over greedily maximizing their own utility. When downloading a file broken up into multiple pieces, peers are often asked to pass on some possible download opportunities of common pieces in order to favor rare pieces. This is to avoid the missing piece syndrome, which throttles the download rate of the peer-to-peer system to that of downloading the file straight from the server. In other situations, peers are asked to stay in the system even though they have collected all the file's pieces and have an incentive to leave right away. We propose a mechanism which allows peers to act greedily and yet stabilizes the peer-to-peer content sharing system. Our mechanism combines a fountain code at the server to generate innovative new pieces, and a prioritization for the server to deliver pieces only to new peers. While by itself neither the fountain code nor the prioritization of new peers alone stabilizes the system, we demonstrate that their combination does, through both analytical and numerical evaluation. (Comment: accepted to IEEE INFOCOM 2014, 9 pages.)
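
    To make the two ingredients concrete, the sketch below pairs a random linear fountain over GF(2) (the server XORs a random subset of the original pieces to produce a fresh coded piece) with a policy that serves only peers that have not collected anything yet. Both functions are illustrative stand-ins; the paper's exact code construction and stability argument are in the full text.

        import random

        def fountain_piece(file_pieces, rng=random):
            # Random linear fountain over GF(2): XOR a random non-empty subset of
            # the original pieces; the mask records which pieces were combined.
            k = len(file_pieces)
            mask = [rng.random() < 0.5 for _ in range(k)]
            if not any(mask):
                mask[rng.randrange(k)] = True
            coded = bytes(len(file_pieces[0]))
            for piece, included in zip(file_pieces, mask):
                if included:
                    coded = bytes(a ^ b for a, b in zip(coded, piece))
            return mask, coded

        def serve(requests, file_pieces):
            # Prioritization from the abstract: the server answers only peers whose
            # piece set is still empty ("new" peers).
            new_peers = [peer for peer in requests if not peer["have"]]
            return [(peer["id"], fountain_piece(file_pieces)) for peer in new_peers]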

    The state of peer-to-peer network simulators

    Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate and extend existing work. We look at the landscape of simulators for research in peer-to-peer (P2P) networks by conducting a survey of a combined total of over 280 papers from before and after 2007 (the year of the last survey in this area), and comment on the large quantity of research using bespoke, closed-source simulators. We propose a set of criteria that P2P simulators should meet, and poll the P2P research community for their agreement. We aim to drive the community towards performing their experiments on simulators that allow others to validate their results.

    Mode-Suppression: A Simple, Stable and Scalable Chunk-Sharing Algorithm for P2P Networks

    The ability of a P2P network to scale its throughput up in proportion to the arrival rate of peers has recently been shown to be crucially dependent on the chunk sharing policy employed. Some policies can result in low frequencies of a particular chunk, known as the missing chunk syndrome, which can dramatically reduce throughput and lead to instability of the system. For instance, commonly used policies that nominally "boost" the sharing of infrequent chunks, such as the well known rarest-first algorithm, have been shown to be unstable. Recent efforts have largely focused on the careful design of boosting policies to mitigate this issue. We take a complementary viewpoint, and instead consider a policy that simply prevents the sharing of the most frequent chunk(s). Following terminology from statistics, wherein the most frequent value in a data set is called the mode, we refer to this policy as mode-suppression. We also consider a more general version that suppresses the mode only if the mode frequency is larger than the lowest frequency by a fixed threshold. We prove the stability of mode-suppression using Lyapunov techniques, and use a Kingman bound argument to show that the total download time does not increase with peer arrival rate. We then design versions of mode-suppression that sample a small number of peers at each time, and construct noisy mode estimates by aggregating these samples over time. We show numerically that the variants of mode-suppression yield near-optimal download times, and outperform all other recently proposed chunk sharing algorithms.
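
    The core of the policy is easy to state in code. A minimal sketch of both variants, assuming swarm-wide chunk frequencies are known (the sampling and noisy-estimate machinery from the paper is omitted):

        from collections import Counter

        def allowed_chunks(chunk_counts, threshold=None):
            # Mode-suppression: never share the most frequent chunk(s). In the
            # thresholded variant the mode is suppressed only when it exceeds the
            # rarest chunk's frequency by more than `threshold`.
            mode_freq = max(chunk_counts.values())
            min_freq = min(chunk_counts.values())
            if threshold is not None and mode_freq - min_freq <= threshold:
                return set(chunk_counts)        # mode within threshold: share everything
            return {c for c, n in chunk_counts.items() if n < mode_freq}

        counts = Counter({0: 7, 1: 5, 2: 12, 3: 5})   # chunk 2 is the mode
        print(allowed_chunks(counts))                 # {0, 1, 3}: the mode is withheld
        print(allowed_chunks(counts, threshold=10))   # {0, 1, 2, 3}: gap 7 <= 10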