10 research outputs found

    Network aware P2P multimedia streaming: capacity or locality?

    P2P content providers are motivated to localize traffic within Autonomous Systems (ASes) and thereby alleviate the tension with ISPs stemming from costly inter-AS traffic generated by geographically distributed P2P users. In this paper, we first present a new three-tier framework to conduct a thorough study of the impact of various capacity-aware or locality-aware neighbor selection and chunk scheduling strategies. Specifically, we propose a novel hybrid neighbor selection strategy with the flexibility to elect neighbors based on either type of network awareness with different probabilities. We find that network awareness in terms of both capacity and locality potentially degrades system QoS as a whole, and that capacity awareness faces effort-based unfairness but enables contribution-based fairness. Extensive simulations show that hybrid neighbor selection can not only promote traffic locality but also lift streaming quality, and that the crux of traffic locality promotion is active overlay construction. Based on this observation, we then propose a fully decentralized network awareness protocol equipped with hybrid neighbor selection. In realistic simulation environments, this protocol can reduce inter-AS traffic from 95% to 38%, a locality performance comparable with tracker-side strategies (35%), under the premise of high streaming quality. Our performance evaluation results provide valuable insights for both theoretical study of selfish topologies and real-deployed system design. © 2011 IEEE. The 2011 IEEE International Conference on Peer-to-Peer Computing (P2P 2011), Kyoto, Japan, 31 August-2 September 2011. In Proceedings of P2P, 2011, p. 54-6
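    The hybrid neighbor selection strategy described above can be sketched as a simple probabilistic choice between the two kinds of network awareness. This is a minimal illustrative sketch, not the paper's protocol; the field names, the `p_capacity` parameter, and the tie-breaking rules are assumptions.

```python
import random

def pick_neighbor(candidates, p_capacity, my_as, rng=random):
    """Pick one neighbor from a candidate list.

    With probability p_capacity, choose capacity-aware (the candidate
    with the most upload capacity); otherwise choose locality-aware
    (a random candidate inside our own AS, falling back to any
    candidate if none is local)."""
    if rng.random() < p_capacity:
        # capacity-aware branch: favor the highest-capacity candidate
        return max(candidates, key=lambda c: c["capacity"])
    # locality-aware branch: favor candidates in the same AS, if any
    local = [c for c in candidates if c["asn"] == my_as]
    return rng.choice(local) if local else rng.choice(candidates)
```

    Setting `p_capacity` to 1.0 or 0.0 recovers the two pure strategies; intermediate values mix them, which is the flexibility the paper attributes to hybrid selection.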

    BitTorrent locality and transit traffic reduction: When, why, and at what cost?

    A substantial amount of work has recently gone into localizing BitTorrent traffic within an ISP in order to avoid excessive and oftentimes unnecessary transit costs. Several architectures and systems have been proposed, and the initial results from specific ISPs and a few torrents have been encouraging. In this work we attempt to deepen and scale our understanding of locality and its potential. First, looking at specific ISPs, we consider tens of thousands of concurrent torrents, and thus capture ISP-wide implications that cannot be appreciated by looking at only a handful of torrents. Second, we go beyond individual case studies and present results for a few thousand ISPs represented in our data set of up to 40K torrents involving more than 3.9M concurrent peers, and more than 20M peers over the course of a day, spread across 11K ASes. Finally, we develop scalable methodologies that allow us to process this huge data set and derive accurate traffic matrices of torrents. Using these methods we obtain the following main findings: (i) although there are a large number of very small ISPs without enough resources for localizing traffic, by analyzing the 100 largest ISPs we show that locality policies are expected to significantly reduce transit traffic in these ISPs with respect to the default random overlay construction method; (ii) contrary to popular belief, increasing the access speed of an ISP's clients does not necessarily help to localize more traffic; (iii) by studying several real ISPs, we show that soft speed-aware locality policies guarantee win-win situations for ISPs and end users. Furthermore, the maximum transit traffic savings that an ISP can achieve without limiting the number of inter-ISP overlay links is bounded by “unlocalizable” torrents with few local clients. 
Restricting the number of inter-ISP links leads to a higher transit traffic reduction, but the QoS of clients downloading “unlocalizable” torrents would be severely harmed. The research leading to these results has been partially funded by the European Union's FP7 Program under the projects eCOUSIN (318398) and TREND (257740), the Spanish Ministry of Economy and Competitiveness under the eeCONTENT project (TEC2011-29688-C02-02), and the Regional Government of Madrid under the MEDIANET Project (S2009/TIC-1468).
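    The bound that "unlocalizable" torrents place on transit savings can be illustrated with a back-of-the-envelope model. This sketch is an assumption for illustration only, not the paper's traffic-matrix methodology: each swarm is a `(local, swarm)` pair of peer counts, a random overlay sends a local peer's downloads to remote peers in proportion to their share of the swarm, and an idealized locality policy leaves only single-local-peer swarms needing transit.

```python
def transit_fraction(swarms, localized):
    """Estimate the share of an ISP's BitTorrent download traffic that
    crosses its transit link. Each swarm is (local_peers, total_peers),
    with total_peers >= 2. One unit of download per local peer."""
    transit = total = 0.0
    for local, swarm in swarms:
        total += local
        if localized:
            # idealized locality: swarms with >= 2 local peers exchange
            # data internally; only 'unlocalizable' swarms need transit
            transit += local if local < 2 else 0.0
        else:
            # random overlay: a peer's neighbors are remote in proportion
            # to the remote share of the swarm
            transit += local * (swarm - local) / (swarm - 1)
    return transit / total
```

    In this toy model the localized transit fraction is exactly the share of local peers sitting in unlocalizable swarms, which mirrors the paper's observation that such torrents bound the achievable savings.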

    Deep diving into BitTorrent locality


    Designing incentives for peer-to-peer systems

    Peer-to-peer systems, networks of egalitarian nodes without a central authority, can achieve massive scalability and fault tolerance through the pooling together of individual resources. Unfortunately, most nodes represent self-interested, or rational, parties that will attempt to maximize their consumption of shared resources while minimizing their own contributions. This constitutes a type of attack that can destabilize the system. The first contribution of this thesis is a proposed taxonomy for these rational attacks and the most common solutions used in contemporary designs to thwart them. One approach is to design the P2P system with incentives for cooperation, so that rational nodes voluntarily behave. We broadly classify these incentives as being either genuine or artificial, with the former describing incentives inherent in peer interactions, and the latter describing a secondary enforcement system. We observe that genuine incentives tend to be more robust to rational manipulation than their artificial counterparts. Based on this observation, we also propose two extensions to BitTorrent, a P2P file distribution protocol. While this system is popular, accounting for approximately one-third of current Internet traffic, it has known limitations. Our extensions use genuine incentives to address some of these problems. The first extension improves seeding, an altruistic mode wherein nodes that have completed their download continue to provide upload service. We incentivize seeding by giving long-term identifiers to clients, enabling seeding clients to be recognized and rewarded in subsequent downloads. Simulations demonstrate that our method is highly effective in protecting swarms from aggressive clients such as BitTyrant. Finally, we introduce The BitTorrent Anonymity Marketplace, wherein each peer simultaneously joins multiple swarms to disguise its true download intentions. 
Peers then trade one torrent for another, making the cover traffic valuable as a means of obtaining the real target. Thus, when a neighbor receives a request from a peer for blocks of a torrent, it does not know whether the peer is really downloading that torrent or only using it in trade. Using simulation, we demonstrate that nodes cannot determine peer intent from observed interactions.
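    The seeding extension's core idea, rewarding recognizable long-term identifiers with preferential service in later downloads, can be sketched roughly as follows. `SeederLedger` and its methods are hypothetical names invented for this illustration, not the thesis's actual protocol.

```python
# Hypothetical sketch: a ledger keyed by long-term client identifiers,
# used to prefer proven seeders when handing out upload (unchoke) slots.
class SeederLedger:
    def __init__(self):
        self.credit = {}  # long-term id -> accumulated seeding credit

    def record_seeding(self, peer_id, bytes_uploaded):
        """Accumulate credit for bytes a client uploaded while seeding."""
        self.credit[peer_id] = self.credit.get(peer_id, 0) + bytes_uploaded

    def rank_for_unchoke(self, peer_ids, slots):
        """Fill the available slots with the peers that have the longest
        seeding history, so seeding in one swarm pays off in the next."""
        ranked = sorted(peer_ids,
                        key=lambda p: self.credit.get(p, 0),
                        reverse=True)
        return ranked[:slots]
```

    The point of the genuine-incentive design is visible here: the reward (faster future downloads) flows directly from the peer interaction being encouraged (seeding), with no separate enforcement layer.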

    Experimental analysis of the socio-economic phenomena in the BitTorrent ecosystem

    BitTorrent is the most successful Peer-to-Peer (P2P) application and is responsible for a major portion of Internet traffic. It has been studied extensively using simulations, models, and real measurements. Although simulations and modelling are easier to perform, they typically simplify the analysed problems, and in the case of BitTorrent they are likely to miss some of the effects that occur in real swarms. Thus, in this thesis we rely on real measurements. In the first part of the thesis we summarize the measurement techniques used so far and use this summary as a basis for designing our own tools, which allow us to perform different types of analysis at different resolution levels. Using these tools we collect several large-scale datasets to study different aspects of BitTorrent, with a special focus on socio-economic aspects. Using our datasets, we first investigate the topology of real BitTorrent swarms and how traffic is actually exchanged among peers. Our analysis shows that the resilience of BitTorrent swarms is lower than that of corresponding random graphs. We also observe that ISP policies, locality-aware clients, and network events (e.g., network congestion) lead to a locality-biased neighbourhood composition in the swarms: a peer has more neighbours from its local provider than would be expected from a purely random neighbour selection process. These results are of interest to companies that use BitTorrent for daily operations as well as to ISPs that carry BitTorrent traffic. In the next part of the thesis we look at BitTorrent from the perspective of the content and the content publishers on major BitTorrent portals. We focus on the factors that seem to drive BitTorrent's popularity and, as a result, could affect its associated traffic on the Internet. We show that a small fraction of publishers (around 100 users) is responsible for more than two-thirds of the published content. 
These publishers can be divided into two groups: (i) profit-driven publishers and (ii) fake publishers. The former leverage published copyrighted content (typically very popular) on BitTorrent portals to attract content consumers to their own web sites for financial gain. Removing this group may have a significant impact on the popularity of BitTorrent portals and, as a result, may affect a large portion of the Internet traffic associated with BitTorrent. The latter group is responsible for fake content, which is mostly linked to malicious activity and creates a serious threat for the BitTorrent ecosystem and for the Internet in general. To mitigate this threat, in the last part of the thesis we present a new tool named TorrentGuard for the early detection of fake content, which could help to significantly reduce the number of computer infections and scams suffered by BitTorrent users. This tool is available through a web portal and as a plugin for Vuze, a popular BitTorrent client. Finally, we present MYPROBE, a web portal that allows users to query our database and gather different pieces of information about BitTorrent content publishers.
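    The publisher concentration finding (around 100 publishers accounting for more than two-thirds of the published content) corresponds to a simple cumulative-share computation over a publisher activity table. A minimal sketch, with invented example data:

```python
def top_publisher_share(uploads_per_publisher, k):
    """Fraction of all published torrents contributed by the k most
    active publishers, given a mapping publisher -> upload count."""
    counts = sorted(uploads_per_publisher.values(), reverse=True)
    return sum(counts[:k]) / sum(counts)
```

    On a real crawl of a portal's publish log, evaluating this for k = 100 is how one would reproduce the two-thirds figure reported above.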

    Incentive-driven QoS in peer-to-peer overlays

    A well-known problem in peer-to-peer overlays is that no single entity has control over the software, hardware, and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS overlays: resource allocation protocols that provide strategic peers with participation incentives while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to achieve time- and space-deferred contribution reciprocity. Then, we present a novel QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction while allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed entirely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden-action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. 
We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation.
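    The Vickrey auction underlying the resource allocation model is a second-price sealed-bid auction. A minimal single-item sketch follows; the thesis applies the mechanism to QoS-aware chunk swarming, which this does not attempt to reproduce.

```python
def vickrey_winner(bids):
    """Single-item second-price auction: the highest bidder wins but
    pays only the second-highest bid. This is what makes truthful
    bidding a dominant strategy for self-interested peers."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price
```

    Because the price a winner pays does not depend on its own bid, a strategic peer gains nothing by misreporting its valuation, which is precisely the robustness-to-selfishness property the thesis exploits.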

    Efficient Content Distribution With Managed Swarms

    Content distribution has become increasingly important as people have become more reliant on Internet services to provide large multimedia content. Efficiently distributing content is a complex and difficult problem: large content libraries are often distributed across many physical hosts, and each host has its own bandwidth and storage constraints. Peer-to-peer and peer-assisted download systems further complicate content distribution. By contributing their own bandwidth, end users can improve overall performance and reduce load on servers, but end users have their own motivations and incentives that are not necessarily aligned with those of content distributors. Consequently, existing content distributors either opt to serve content exclusively from hosts under their direct control, and thus neglect the large pool of resources that end users can offer, or they allow end users to contribute bandwidth at the expense of sacrificing complete control over available resources. This thesis introduces a new approach to content distribution that achieves high performance for distributing bulk content, based on managed swarms. Managed swarms efficiently allocate bandwidth from origin servers, in-network caches, and end users to achieve system-wide performance objectives. Managed swarming systems are characterized by the presence of a logically centralized coordinator that maintains a global view of the system and directs hosts toward an efficient use of bandwidth. The coordinator allocates bandwidth from each host based on empirical measurements of swarm behavior combined with a new model of swarm dynamics. The new model enables the coordinator to predict how swarms will respond to changes in bandwidth based on past measurements of their performance. In this thesis, we focus on the global objective of maximizing download bandwidth across end users in the system. 
To that end, we introduce two algorithms that the coordinator can use to compute efficient allocations of bandwidth for each host, resulting in high download speeds for clients. We have implemented a scalable coordinator that uses these algorithms to maximize system-wide aggregate bandwidth. The coordinator actively measures swarm dynamics and uses the data to calculate, for each host, a bandwidth allocation among the swarms competing for the host's bandwidth. Extensive simulations and a live deployment show that managed swarms significantly outperform centralized distribution services as well as completely decentralized peer-to-peer systems.
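    A coordinator that splits each host's upload capacity among the swarms competing for it might, in the simplest case, allocate in proportion to each swarm's demand, redistributing whatever capacity is freed when a swarm's demand is fully met. This sketch is an illustrative assumption, not the allocation algorithms developed in the thesis.

```python
def allocate_bandwidth(capacity, demands):
    """Split one host's upload capacity among competing swarms in
    proportion to demand, capping each swarm at its demand and
    redistributing leftover capacity until it is exhausted."""
    alloc = {s: 0.0 for s in demands}
    remaining = dict(demands)  # swarms whose demand is not yet met
    cap = capacity
    while cap > 1e-9 and remaining:
        total = sum(remaining.values())
        spent = 0.0
        for s, d in list(remaining.items()):
            share = cap * d / total          # proportional share
            give = min(share, d - alloc[s])  # never exceed demand
            alloc[s] += give
            spent += give
            if alloc[s] >= d - 1e-9:
                del remaining[s]             # demand met; stop funding
        cap -= spent
        if spent < 1e-9:
            break
    return alloc
```

    A real coordinator would replace the proportional rule with allocations driven by measured swarm dynamics, as the abstract describes, but the cap-and-redistribute loop shows the basic shape of the optimization.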

    Pushing the performance of Biased Neighbor Selection through Biased Unchoking


    Live Streaming with Gossip

    Peer-to-peer (P2P) architectures have emerged as a popular paradigm to support the dynamic and scalable nature of distributed systems. This is particularly relevant today, given the tremendous increase in the intensity of information exchanged over the Internet. A P2P system is typically composed of participants that are willing to contribute resources, such as memory or bandwidth, to the execution of a collaborative task that benefits all participants. File sharing is probably the most widely used collaborative task, where each participant wants to receive an individual copy of some file. Users collaborate by sending fragments of the file they have already downloaded to other participants. Sharing files containing multimedia content, which typically reach hundreds of megabytes to gigabytes, introduces a number of challenges. Given typical participant bandwidths of hundreds of kilobits per second to a couple of megabits per second, the download takes a non-negligible time, and it is unacceptable to wait until it completes before actually being able to use the file. From the point of view of the participant, getting the (entire) file as fast as possible is typically not good enough. For example, Video on Demand (VoD) is a scenario where a participant would like to start previewing the multimedia content (the stream) offered by a source even though only a fraction of it has been received, and then continue viewing while the rest of the content arrives. Following the same line of reasoning, new applications have emerged that rely on live streaming: the source does not own a file that it wants to share with others, but shares content as soon as it is produced. In other words, the content to distribute is live, not pre-recorded and stored. Typical examples include the broadcasting of live sports events, conferences, or interviews. 
The gossip paradigm is a type of data dissemination that relies on random communication between participants in a P2P system, sharing similarities with the epidemic spread of diseases. An epidemic starts to spread when the source randomly chooses a set of communication partners of size fanout and infects them, i.e., it shares a rumor with them. Each of these participants, in turn, randomly picks fanout communication partners and infects them, i.e., shares the same rumor with them. This paradigm has many advantages, including fast propagation of rumors, a probabilistic guarantee that each rumor reaches all participants, high resilience to churn (i.e., participants that join and leave), and high scalability. Gossip therefore constitutes a candidate of choice for live streaming in large-scale systems. These advantages, however, come at a price. While disseminating data, gossip creates many duplicates of the same rumor, and participants usually receive multiple copies of it. While this is obviously a feature when it comes to guaranteeing good dissemination under high churn, it is a clear disadvantage when spreading large amounts of multimedia data (i.e., ordered and time-critical) to participants with limited resources, namely upload bandwidth in the case of high-bandwidth content dissemination. This thesis therefore investigates if and how the gossip paradigm can be used as a highly efficient communication system for live streaming under the following specific scenarios: (i) where participants can only contribute limited resources, (ii) where these limited resources are heterogeneously distributed among nodes, and (iii) where only a fraction of participants contribute their fair share of work while others are freeriding. 
To meet these challenges, this thesis proposes: (i) gossip++, a gossip-based protocol tailored for live streaming that separates the dissemination of metadata, i.e., the location of the data, from the dissemination of the data itself; by first spreading the location of the content to interested participants, the protocol avoids wasting bandwidth on sending and receiving duplicates of the payload; (ii) HEAP, a fanout adaptation mechanism that enables gossip to adapt participants' contributions to their resources while still preserving its reliability; and (iii) LiFT, a protocol to secure high-bandwidth gossip-based dissemination protocols against freeriders.
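    Plain push gossip as described above, where each newly infected peer forwards the rumor to fanout peers chosen uniformly at random, can be simulated in a few lines. This toy model ignores bandwidth limits, churn, and the metadata/payload separation of gossip++; a forwarder may even pick itself or an already-infected peer, which is exactly the duplicate traffic the abstract discusses.

```python
import random

def gossip_rounds(n_peers, fanout, source=0, rng=None):
    """Simulate push-based gossip: every newly infected peer forwards
    the rumor to `fanout` uniformly random peers. Returns the number
    of rounds until no new peer is infected, and the final coverage
    (fraction of peers that received the rumor)."""
    rng = rng or random.Random(42)  # fixed seed for repeatability
    infected = {source}
    frontier = [source]  # peers infected in the previous round
    rounds = 0
    while frontier:
        nxt = []
        for _ in frontier:
            # each frontier peer pushes to `fanout` random targets;
            # pushes to already-infected peers are wasted duplicates
            for peer in rng.sample(range(n_peers), fanout):
                if peer not in infected:
                    infected.add(peer)
                    nxt.append(peer)
        frontier = nxt
        rounds += 1
    return rounds, len(infected) / n_peers
```

    Running this with a fanout of around ln(n) typically infects almost all peers in a logarithmic number of rounds, which is the fast-propagation property claimed above; counting the wasted duplicate pushes shows the price the abstract refers to.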

    Filesharing und Abmahnwesen

    This work comprises a doctrinal and empirical legal study of the filesharing phenomenon, with a focus on the liability of the subscriber of an Internet connection. After an explanation of the technical preliminaries relevant to understanding the topic, a descriptive account of the development and current state of the legal situation follows. The work then examines how an "Abmahnwesen" (a cease-and-desist industry, a term developed in this work) could emerge from this legal situation. Following a legal-policy critique and a comparative legal analysis, the legal situation is critically appraised from a doctrinal perspective. The work closes with an account of possible developments de lege lata and de lege ferenda.