10 research outputs found

    ISP-friendly Peer-assisted On-demand Streaming of Long Duration Content in BBC iPlayer

    Full text link
In search of scalable solutions, CDNs are exploring P2P support. However, the benefits of peer assistance can be limited by several obstacle factors: ISP friendliness (peers must be within the same ISP), bitrate stratification (peers must be matched with others needing a similar bitrate), and partial participation (some peers choose not to redistribute content). This work relates the potential gains from peer assistance to the average number of users in a swarm and the swarm's capacity, and empirically studies the effects of these obstacle factors at scale, using a month-long trace of over 2 million users in London accessing BBC shows online. Results indicate that even when P2P swarms are localised within ISPs, up to 88% of traffic can be saved. Surprisingly, bitrate stratification results in two large sub-swarms and does not significantly affect savings. However, partial participation and the need for a minimum swarm size do affect gains. We investigate improving gains by increasing content availability through two well-studied techniques: content bundling (combining multiple items to increase availability) and historical caching of previously watched items. Bundling proves ineffective, as the increased server traffic from larger bundles outweighs the availability benefits, but simple caching can considerably boost the traffic gains from peer assistance.
Comment: In Proceedings of IEEE INFOCOM 201
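
As a rough illustration of the swarm-size reasoning above, the sketch below estimates the fraction of server traffic that ISP-localised peer assistance could save under partial participation and a minimum swarm size. This is not the paper's methodology: the session records, field layout, and parameter values are all invented for illustration.

```python
from collections import defaultdict

def peer_assisted_savings(sessions, participation=0.8, min_swarm=2):
    """Estimate the fraction of bytes servable by peers when swarms are
    restricted to a single ISP.

    sessions: iterable of (isp, content_id, bytes_downloaded) tuples
              (a hypothetical trace format).
    participation: fraction of peers willing to redistribute content.
    min_swarm: swarm size below which peer assistance is not attempted.
    """
    swarms = defaultdict(lambda: [0, 0])  # (isp, content) -> [users, bytes]
    for isp, content, nbytes in sessions:
        swarms[(isp, content)][0] += 1
        swarms[(isp, content)][1] += nbytes

    total = saved = 0.0
    for users, nbytes in swarms.values():
        total += nbytes
        seeders = users * participation
        if seeders >= min_swarm:
            # Assume a 1/seeders share of the bytes still comes from the
            # server (roughly, the swarm's seed copy); the rest is peer-fed.
            saved += nbytes * (1.0 - 1.0 / seeders)
    return saved / total if total else 0.0

# Toy data: one well-populated swarm and one barely viable swarm.
sessions = [("isp_a", "show1", 500)] * 10 + [("isp_b", "show2", 500)] * 3
print(f"estimated savings: {peer_assisted_savings(sessions):.0%}")
```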

    The Internet-Wide Impact of P2P Traffic Localization on ISP Profitability

    Get PDF
We conduct a detailed simulation study to examine how localizing P2P traffic within network boundaries impacts the profitability of an ISP. A distinguishing aspect of our work is the focus on Internet-wide implications, i.e., how adoption of localization within an ISP affects both itself and other ISPs. Our simulations are based on detailed models that estimate inter-autonomous-system (AS) P2P traffic and inter-AS routing, localization models that predict the extent to which P2P traffic is reduced, and pricing models that predict the impact of changes in traffic on an ISP's profit. We evaluate our models using a large-scale crawl of BitTorrent containing over 138 million users sharing 2.75 million files. Our results show that the benefits of localization cannot be taken for granted. Some of our key findings include: 1) residential ISPs can actually lose money when localization is employed, and some of them will not see increased profitability until other ISPs employ localization; 2) the reduction in costs due to localization will be limited for small ISPs and tends to grow only logarithmically with client population; and 3) some ISPs can better increase profitability through strategies other than localization, by taking advantage of the business relationships they have with other ISPs.
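
The interplay these findings rest on can be caricatured with a toy profit model: subscriber revenue is flat, while transit cost scales with the share of P2P traffic that is not localized. The function shape, parameter names, and figures below are assumptions for illustration, not the paper's pricing or routing models, which also capture inter-ISP effects this sketch ignores.

```python
def isp_profit(subscribers, p2p_gb_per_sub, local_fraction,
               revenue_per_sub=30.0, transit_cost_per_gb=0.05):
    """Toy model: profit = flat subscriber revenue minus transit cost for
    the share of P2P traffic that crosses the ISP boundary."""
    revenue = subscribers * revenue_per_sub
    transit_gb = subscribers * p2p_gb_per_sub * (1.0 - local_fraction)
    return revenue - transit_gb * transit_cost_per_gb

# Localization only pays off to the extent that inter-AS traffic actually
# shrinks; the paper shows this also depends on what other ISPs do.
for frac in (0.0, 0.5, 0.9):
    print(f"localized {frac:.0%}: profit ${isp_profit(100_000, 40, frac):,.0f}")
```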

BitTorrent locality and transit traffic reduction: When, why, and at what cost?

    Get PDF
A substantial amount of work has recently gone into localizing BitTorrent traffic within an ISP in order to avoid excessive, and often unnecessary, transit costs. Several architectures and systems have been proposed, and the initial results from specific ISPs and a few torrents have been encouraging. In this work we attempt to deepen and scale our understanding of locality and its potential. First, looking at specific ISPs, we consider tens of thousands of concurrent torrents, and thus capture ISP-wide implications that cannot be appreciated by looking at only a handful of torrents. Second, we go beyond individual case studies and present results for the few thousand ISPs represented in our data set of up to 40K torrents, involving more than 3.9M concurrent peers and more than 20M peers over the course of a day, spread across 11K ASes. Finally, we develop scalable methodologies that allow us to process this huge data set and derive accurate traffic matrices of torrents. Using these methods, we obtain the following main findings: i) although there are a large number of very small ISPs without enough resources for localizing traffic, by analyzing the 100 largest ISPs we show that locality policies are expected to significantly reduce transit traffic in these ISPs with respect to the default random overlay construction method; ii) contrary to popular belief, increasing the access speed of an ISP's clients does not necessarily help to localize more traffic; iii) by studying several real ISPs, we show that soft speed-aware locality policies guarantee win-win situations for ISPs and end users. Furthermore, the maximum transit traffic savings that an ISP can achieve without limiting the number of inter-ISP overlay links is bounded by “unlocalizable” torrents with few local clients. Restricting the number of inter-ISP links leads to a higher transit traffic reduction, but the QoS of clients downloading “unlocalizable” torrents would be severely harmed.
The research leading to these results has been partially funded by the European Union's FP7 Program under the projects eCOUSIN (318398) and TREND (257740), the Spanish Ministry of Economy and Competitiveness under the eeCONTENT project (TEC2011-29688-C02-02), and the Regional Government of Madrid under the MEDIANET Project (S2009/TIC-1468).
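
A minimal sketch of the kind of comparison being scaled up here: transit traffic for one torrent under the default random overlay versus a locality-biased neighbour selection policy. The AS-level peer lists, per-link byte unit, and bias parameter are hypothetical; the paper derives such traffic matrices from measured swarms rather than simulating them.

```python
import random

def transit_traffic(peers_by_as, neighbours=4, locality_bias=0.0, unit=1.0):
    """peers_by_as: dict mapping AS number -> list of peer ids in one torrent.
    Each peer picks `neighbours` partners. With probability `locality_bias`
    a pick is forced to a same-AS peer (when one exists); otherwise it is
    uniform over the swarm, i.e. the default random overlay construction.
    Returns bytes crossing AS boundaries at `unit` bytes per inter-AS link."""
    swarm = [(asn, p) for asn, ps in peers_by_as.items() for p in ps]
    transit = 0.0
    for asn, peer in swarm:
        has_local = len(peers_by_as[asn]) > 1
        for _ in range(neighbours):
            if has_local and random.random() < locality_bias:
                continue  # locality-biased pick: same-AS link, no transit
            other_as, _ = random.choice([x for x in swarm if x[1] != peer])
            if other_as != asn:
                transit += unit  # inter-AS link: transit traffic
    return transit

# One large local population plus an "unlocalizable" two-peer AS, which
# keeps generating transit even under a strong locality bias.
random.seed(1)
swarm = {64496: [f"p{i}" for i in range(20)], 64497: ["q0", "q1"]}
print("random overlay :", transit_traffic(swarm, locality_bias=0.0))
print("locality-biased:", transit_traffic(swarm, locality_bias=0.9))
```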

    Exploring Peer-to-Peer Locality in Multiple Torrent Environment

    Full text link

    Deep diving into BitTorrent locality

    Full text link

    Exploring traffic and QoS management mechanisms to support mobile cloud computing using service localisation in heterogeneous environments

    Get PDF
In recent years, mobile devices have evolved to support an amalgam of multimedia applications and content. However, the small size of these devices limits the local computing resources available. The emergence of Cloud technology has set the ground for an era of task offloading for mobile devices, and we are now seeing the deployment of applications that make more extensive use of Cloud processing as a means of augmenting the capabilities of mobile devices. Mobile Cloud Computing is the term used to describe the convergence of these technologies towards applications and mechanisms that offload tasks from mobile devices to the Cloud. In order for mobile devices to access Cloud resources and successfully offload tasks there, a solution for constant and reliable connectivity is required. The proliferation of wireless technology ensures that networks are available almost everywhere in an urban environment and mobile devices can stay connected to a network at all times. However, user mobility is often the cause of intermittent connectivity that affects the performance of applications and ultimately degrades the user experience. 5th Generation Networks are introducing mechanisms that enable constant and reliable connectivity through seamless handovers between networks and provide the foundation for a tighter coupling between Cloud resources and mobiles. This convergence of technologies creates new challenges in the areas of traffic management and QoS provisioning. The constant connectivity of mobile devices to, and their reliance on, Cloud resources has the potential to create large traffic flows between networks. Furthermore, depending on the type of application generating the traffic flow, very strict QoS may be required from the networks, as suboptimal performance may severely degrade an application's functionality. In this thesis, I propose a new service delivery framework, centred on the convergence of Mobile Cloud Computing and 5G networks, for the purpose of optimising service delivery in a mobile environment. The framework is used as a guideline for identifying different aspects of service delivery in a mobile environment and for providing a path for future research in this field. The focus of the thesis is placed on the service delivery mechanisms that are responsible for optimising QoS and managing network traffic. I present a solution for managing traffic through dynamic service localisation according to user mobility and device connectivity. I implement a prototype of the solution in a virtualised environment as a proof of concept and demonstrate its functionality and the results gathered from experimentation. Finally, I present a new approach to modelling network performance that takes user mobility into account. The model considers the overall performance of a persistent connection as the mobile node switches between different networks. Results from the model can be used to determine which networks will negatively affect application performance and what impact they will have for the duration of the user's movement. The proposed model is evaluated using an analytical approach.
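
To make the mobility-aware performance model concrete, here is a hedged sketch that scores a persistent connection as the time-weighted average of the networks a user traverses, flagging networks below a QoS floor. The thesis's model is analytical; the handover penalty, network names, and figures below are invented for illustration.

```python
def path_performance(segments, handover_penalty_s=1.0, qos_floor_mbps=5.0):
    """segments: list of (network_name, residence_time_s, throughput_mbps)
    describing a user's movement. Returns the connection's time-averaged
    throughput and the networks below the QoS floor. Each handover costs
    `handover_penalty_s` of dead time, a stand-in for handover overhead."""
    total_time = total_megabits = 0.0
    weak = []
    for i, (name, secs, mbps) in enumerate(segments):
        useful = max(secs - (handover_penalty_s if i else 0.0), 0.0)
        total_time += secs
        total_megabits += useful * mbps
        if mbps < qos_floor_mbps:
            weak.append(name)
    return total_megabits / total_time, weak

# Hypothetical route: home Wi-Fi -> LTE cell -> congested campus Wi-Fi.
route = [("home_wifi", 120, 50.0), ("lte_cell_17", 300, 12.0),
         ("campus_wifi", 60, 3.0)]
avg, weak = path_performance(route)
print(f"average throughput {avg:.1f} Mbps; below QoS floor: {weak}")
```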

    Improving End-to-End Internet Performance by Detouring

    No full text
The Internet provides a best-effort service, which yields a robust, fault-tolerant network. However, the performance of the paths found by regular Internet routing is suboptimal. As a result, applications rarely achieve all the benefits that the Internet can provide. The problem is made more difficult because the Internet is formed of competing ISPs, which have little incentive to reveal information about the performance of Internet paths. As a result, the Internet is sometimes referred to as a 'black box'. Detouring uses routing overlay networks to find alternative paths (or detour paths) that can improve reliability, latency and bandwidth. Previous work has shown that detouring can improve the Internet. However, one important issue remains: how can these detour paths be found without conducting large-scale measurements? In this thesis, we describe practical methods, scalable to the Internet, for discovering detour paths that improve specific performance metrics. In particular, we concentrate our efforts on two metrics, latency and bandwidth, arguably the two most important performance metrics for end-user applications. Taking advantage of the Internet topology, we show how nodes can learn about segments of Internet paths that can be exploited by detouring, leading to reduced path latencies. Next, we investigate bandwidth detouring, revealing constructive detour properties and effective mechanisms for detouring paths in overlay networks. This leads to Ukairo, our bandwidth detouring platform that is scalable to the Internet, and tcpChiryo, which predicts bandwidth in an overlay network by measuring a small portion of the network.
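
The core of one-hop latency detouring can be shown in a few lines: given pairwise latencies, pick a relay r that minimises lat(src, r) + lat(r, dst) and beats the direct path. The latency matrix below is invented, and the thesis's contribution is precisely avoiding the exhaustive all-pairs measurement this brute-force sketch assumes.

```python
def best_detour(lat, src, dst):
    """lat: dict-of-dicts of pairwise latencies in ms (hypothetical data).
    Returns (relay, latency): the best one-hop detour, or (None, direct)
    when no relay improves on the direct path."""
    direct = lat[src][dst]
    relay, best = None, direct
    for r in lat:
        if r in (src, dst):
            continue
        via = lat[src][r] + lat[r][dst]
        if via < best:
            relay, best = r, via
    return relay, best

# Toy triangle where the direct a->b path is slow.
lat = {"a": {"a": 0, "b": 90, "c": 20},
       "b": {"a": 90, "b": 0, "c": 30},
       "c": {"a": 20, "b": 30, "c": 0}}
print(best_detour(lat, "a", "b"))  # ('c', 50): beats the direct 90 ms path
```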