    Reducing BitTorrent Download Time via Handshake-Based Switching

    Peer-to-peer networking overcomes the single point of failure and bandwidth limitations inherent to the centralized server model of file sharing. It is both a popular means of sharing digital content and a major consumer of internet traffic, with BitTorrent being the most widely used protocol. As such, significant research has gone into improving peer-to-peer performance in order to reduce both download times and networking costs. One aspect that affects performance is the client's selection of peers to download from, as the time spent downloading from even a single poor-performing peer can lengthen the overall download. A recent peer selection strategy had the client use historical knowledge, acquired both through third-party sources and from its own first-hand experience with previously visited peers, to select likely good performers. It coupled this with a peer switching strategy that replaced peers whose post-selection downloads performed poorly, contrary to what historical knowledge suggested, in order to limit the time spent downloading from such peers. Though this tactic reduced download times compared to various past works, it still suffered from poor peer selection because its historical knowledge did not necessarily reflect the current state of the peers. This work introduced and examined an enhancement to this hybrid peer selection and switching strategy that adds current intelligence about a peer's available bandwidth, while avoiding the additional network costs of the on-the-fly probing or querying techniques other peer selection strategies use to benchmark prospective peers. With this on-the-fly knowledge of a peer's current bandwidth availability, the enhanced strategy quickly replaced poor performers without waiting for downloads to be performed and subsequently benchmarked, reducing overall peer-to-peer download times. Adding this pre-download peer switching enhancement improved download performance, particularly in early file transfer runs. However, as more runs occurred and the benefits of the original strategy's historical knowledge became more pronounced, the time savings from the new enhancement diminished.
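
    To make the idea concrete, here is a minimal sketch (ours, not the paper's code) of pre-download switching: candidates are ranked by historical rate, and any peer whose handshake-time advertised bandwidth falls well below that history is skipped before a download ever starts. The Peer fields, the advertised-bandwidth signal, and the slack threshold are all assumptions.

        from dataclasses import dataclass

        @dataclass
        class Peer:
            id: str
            advertised_bandwidth: float  # hypothetical handshake-time signal

        def select_and_prescreen(candidates, history, needed, slack=0.5):
            # Rank candidates by historical download rate, best first.
            ranked = sorted(candidates, key=lambda p: history.get(p.id, 0.0),
                            reverse=True)
            chosen = []
            for peer in ranked:
                if len(chosen) == needed:
                    break
                expected = history.get(peer.id, 0.0)
                # Pre-download switch: skip a peer that currently advertises far
                # less bandwidth than its history suggests (slack is an assumed
                # tunable; unknown peers get the benefit of the doubt).
                if expected == 0.0 or peer.advertised_bandwidth >= slack * expected:
                    chosen.append(peer)
            return chosen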

    Reducing the Download Time in Stochastic P2P Content Delivery Networks by Improving Peer Selection

    Peer-to-peer (P2P) applications have become a popular method for obtaining digital content. Recent research has shown that the amount of time spent downloading from a poor-performing peer affects the total download duration. Current peer selection strategies attempt to limit the time spent downloading from a poor-performing peer, but they do not use both advanced knowledge and post-connection service capacity to aid in peer selection. Advanced knowledge has traditionally been obtained from methods that add additional overhead to the P2P network, such as polling peers for service capacity information, using round-trip-time techniques to calculate the distance between peers, and using tracker peers. This work investigated a new download strategy that replaces the random selection of peers with a method that selects server peers based on historic service capacity and ISP in order to further reduce the time needed to complete a download session. The results of this new historic-based peer selection strategy show that there are benefits in using advanced knowledge to select peers and in replacing only the worst-performing peers. The new approach showed an average download duration improvement of 16.6% in the single-client simulation and an average cross-ISP traffic reduction of 55.17% when ISPs were participating in cross-ISP throttling. In the multiple-client simulation, the new approach showed an average download duration improvement of 53.31% and an average cross-ISP traffic reduction of 88.83% when ISPs were participating in cross-ISP throttling. This new approach also significantly improved the consistency of the download duration between download sessions, allowing for more accurate prediction of download times.
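
    A rough sketch of how such a strategy could be structured, under our own assumptions (dict-shaped peer records, an arbitrary same-ISP scoring bonus, and a simple measured-rate table); the thesis's actual algorithm may differ:

        def rank_peers(peers, history, my_isp, isp_bonus=1.2):
            # Score by historic service capacity, with an assumed bonus for
            # same-ISP peers to discourage cross-ISP traffic.
            def score(p):
                return history.get(p["id"], 0.0) * (isp_bonus if p["isp"] == my_isp else 1.0)
            return sorted(peers, key=score, reverse=True)

        def replace_worst(active, candidates, measured_rate):
            # Swap out only the single worst performer each interval, keeping
            # the rest of the already vetted peer set intact.
            if not active or not candidates:
                return active
            worst = min(active, key=lambda p: measured_rate.get(p["id"], 0.0))
            active.remove(worst)
            active.append(candidates.pop(0))
            return active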

    Optimizing on-demand resource deployment for peer-assisted content delivery

    Increasingly, content delivery solutions leverage client resources in exchange for services in a peer-to-peer (P2P) fashion. Such a peer-assisted service paradigm promises significant infrastructure cost reduction, but it suffers from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity to clients, especially for real-time applications where content cannot be cached. In this thesis, we propose a novel architectural service model that enables the establishment of higher fidelity services through (1) coordinating the content delivery to efficiently utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on-demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the upstream capacity of clients. We target three applications that require the delivery of real-time as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time -- the time it takes to deliver the content to all clients in a group. The second application is live video streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, and especially for clients running bandwidth-intensive applications. For each of the above applications, we develop analytical models that efficiently allocate the already available resources. They also efficiently allocate additional on-demand resources to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate these techniques through simulation and/or implementation.
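
    The bulk-synchronous application admits a well-known fluid lower bound on distribution time, which also suggests how much extra angel upload capacity a target time implies. The sketch below applies that classic bound; the function names and the assumption that only the aggregate-upload term is binding are ours:

        def min_distribution_time(file_size, server_up, client_up, client_down):
            # Fluid lower bound: the server must push the file once, the slowest
            # client must be able to receive it, and the aggregate upload of the
            # swarm must serve n full copies.
            n = len(client_up)
            return max(file_size / server_up,
                       file_size / min(client_down),
                       n * file_size / (server_up + sum(client_up)))

        def angel_upload_needed(file_size, target_time, server_up, client_up):
            # Extra upstream capacity (e.g., leased through on-demand angels)
            # required for the aggregate-upload term alone to meet target_time.
            n = len(client_up)
            deficit = n * file_size / target_time - (server_up + sum(client_up))
            return max(0.0, deficit)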

    Optimizing on-demand resource deployment for peer-assisted content delivery (PhD thesis)

    Increasingly, content delivery solutions leverage client resources in exchange for service in a peer-to-peer (P2P) fashion. Such peer-assisted service paradigms promise significant infrastructure cost reduction, but suffer from the unpredictability associated with client resources, which is often exhibited as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity to the clients. In this thesis, we propose a novel architectural service model that enables the establishment of higher fidelity services through (1) coordinating the content delivery to optimally utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on-demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the uplink capacity of clients. We target three applications that require the delivery of fresh as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time -- the time it takes to deliver the content to all clients in a group. The second application is live streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, and especially for bandwidth-intensive applications. For each of the above applications, we develop mathematical models that optimally allocate the already available resources. They also optimally allocate additional on-demand resources to achieve a certain level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate those techniques through simulation and/or implementation. (Major Advisor: Azer Bestavros)
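
    For the live streaming application, a back-of-the-envelope capacity check captures the resource imbalance the thesis targets: aggregate upload must cover every client's stream rate, and any shortfall is what on-demand angels would have to lease. The sketch is our illustration, not the thesis's model:

        def streaming_upload_deficit(n_clients, stream_rate, server_up, client_up):
            # Every client must receive stream_rate, so the swarm needs roughly
            # n * stream_rate of aggregate upload; any shortfall is the capacity
            # on-demand angels would have to contribute.
            required = n_clients * stream_rate
            available = server_up + sum(client_up)
            return max(0.0, required - available)

        # 1000 viewers of a 2 Mbit/s stream, a 500 Mbit/s origin, and clients
        # contributing 1.2 Mbit/s each leave a 300 Mbit/s gap for angels:
        # streaming_upload_deficit(1000, 2.0, 500.0, [1.2] * 1000) -> 300.0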

    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, the AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an "insider's" view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load-balancing and offer improved peer transfer performance.
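
    As an illustration of the transit-minimisation role an AVP could play, the toy cache below serves popular pieces from inside the AS instead of refetching them across AS boundaries. The class name, capacity handling, and least-requested eviction rule are our assumptions, not the thesis's caching strategies:

        from collections import Counter

        class AvpPieceCache:
            # Toy piece cache an AVP might run near the AS border: popular
            # pieces are served locally rather than fetched repeatedly from
            # peers in other ASes.
            def __init__(self, capacity):
                self.capacity = capacity
                self.store = {}
                self.requests = Counter()

            def request(self, piece_id, fetch_remote):
                self.requests[piece_id] += 1
                if piece_id in self.store:
                    return self.store[piece_id]   # intra-AS hit: no transit cost
                data = fetch_remote(piece_id)     # costly inter-AS transfer
                if len(self.store) >= self.capacity:
                    coldest = min(self.store, key=lambda k: self.requests[k])
                    del self.store[coldest]       # evict least-requested piece
                self.store[piece_id] = data
                return data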

    Video-on-Demand over Internet: a survey of existing systems and solutions

    Video-on-Demand is a service where movies are delivered to distributed users with low delay and free interactivity. The traditional client/server architecture experiences scalability issues in providing video streaming services, so there have been many proposals of systems, mostly based on a peer-to-peer or on a hybrid server/peer-to-peer solution, to solve this issue. This work presents a survey of the currently existing or proposed systems and solutions, based upon a subset of representative systems, and defines selection criteria that allow these systems to be classified. These criteria are based on common questions such as: is it video-on-demand or live streaming; is the architecture based on a content delivery network, on peer-to-peer, or on both; is the delivery overlay tree-based or mesh-based; is the system push-based or pull-based, single-stream or multi-stream; does it use data coding; and how do the clients choose their peers? Representative systems are briefly described to give a summarized overview of the proposed solutions, and four of them are analyzed in detail. Finally, an attempt is made to evaluate the most promising solutions for future experiments.
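
    The survey's selection criteria translate naturally into a record type; the sketch below captures them as fields (the field names and the example values are ours, not classifications from the survey):

        from dataclasses import dataclass

        @dataclass
        class VodSystemProfile:
            # One field per selection criterion named in the survey's abstract.
            name: str
            service: str        # "video-on-demand" or "live streaming"
            architecture: str   # "CDN", "P2P", or "hybrid"
            overlay: str        # "tree" or "mesh"
            transfer: str       # "push" or "pull"
            streams: str        # "single" or "multi"
            data_coding: bool
            peer_selection: str

        # Hypothetical placeholder, not a finding from the survey:
        example = VodSystemProfile(
            name="ExampleSystem", service="video-on-demand",
            architecture="hybrid", overlay="mesh", transfer="pull",
            streams="multi", data_coding=True, peer_selection="latency-based",
        )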

    Efficient packet delivery in modern communication networks

    Modern communication networks are often designed for diverse applications, such as voice, data and video. Packet-switching is often adopted in today's networks to transmit multiple types of traffic. In packet-switching networks, network performance is directly affected by how the networks handle their packets. This work addresses the packet-handling issues from the following two aspects: Quality of Service (QoS) and network coding. QoS has been a well-addressed issue in the study of IP-based networks. Generally, nodes in a network need to be informed of the state of each communication link in order to make intelligent decisions to route packets according to their QoS demands. The link state can, however, change rapidly in a network; therefore, nodes would have to receive frequent link state updates in order to maintain the latest link state information at all times. Frequent link state updating is resource-consuming and hence impractical in network design. Therefore, there is a trade-off between the link state updating frequency and the QoS routing performance. It is necessary to design a link state update algorithm that utilizes less frequent link state updates to achieve a high degree of satisfaction in QoS performance. The first part of this work addresses this link state update problem and provides two solutions: ROSE and Smart Packet Marking. ROSE is a class-based link state update algorithm, in which the class boundaries are designed based on the statistical data of users' QoS requests. By doing so, a link state update is triggered only when certain necessary conditions are met. For example, if the available bandwidth of a link is fluctuating within a range that is higher than the highest possible bandwidth request, there is no need to update the state of this link. Smart Packet Marking utilizes a concept similar to ROSE, except that the link state information is carried in a probing packet sent in conjunction with each connection request instead of through link state updates. The second part of this work addresses the packet-handling issue by means of network coding. Instead of the traditional store-and-forward approach, network coding allows intermediate nodes in a multi-hop path to code multiple packets into one in order to reduce bandwidth consumption. The coded packet can later be decoded by its recipients to retrieve the original plain packets. Network coding is found to be beneficial in many network applications. This dissertation makes contributions in network coding in two areas: peer-to-peer file sharing and wireless ad-hoc networks. The benefit of network coding in peer-to-peer file sharing networks is analyzed, and a network coding algorithm -- Downloader-Initiated Random Linear Network Coding (DRLNC) -- is proposed. DRLNC shifts the coding decision from the seeders to the leechers; by doing so it solves the "collision" problem without increasing the field size. In wireless network coding, this work addresses the implementation difficulty pertaining to MAC-layer scheduling. Achieving the ideal network coding gain in wireless networks requires perfect MAC-layer scheduling. This dissertation first provides an algorithm to solve the ideal-case MAC-layer scheduling problem. Since the ideal MAC-layer schedule is often difficult to realize, a practical approach is then proposed to increase the network coding performance by modifying the ACK packets in the 802.11 MAC.
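
    To illustrate the class-based idea behind ROSE, the sketch below derives bandwidth class boundaries from past requests and triggers an update only when the available bandwidth crosses a boundary, so fluctuations above the largest plausible request never cause an update. The quantile-based boundary rule is our assumption; ROSE's exact construction may differ:

        def make_class_boundaries(request_history, n_classes=4):
            # Place boundaries at quantiles of observed QoS bandwidth requests,
            # so classes distinguish only levels that real requests care about.
            reqs = sorted(request_history)
            step = max(1, len(reqs) // n_classes)
            return [reqs[i] for i in range(step, len(reqs), step)]

        def needs_update(old_bw, new_bw, boundaries):
            # Trigger a link state update only when the available bandwidth
            # moves into a different class; fluctuations entirely above the
            # highest boundary (or within one class) stay silent.
            def cls(bw):
                return sum(1 for b in boundaries if bw >= b)
            return cls(old_bw) != cls(new_bw)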

    Efficient Content Distribution With Managed Swarms

    Content distribution has become increasingly important as people have become more reliant on Internet services to provide large multimedia content. Efficiently distributing content is a complex and difficult problem: large content libraries are often distributed across many physical hosts, and each host has its own bandwidth and storage constraints. Peer-to-peer and peer-assisted download systems further complicate content distribution. By contributing their own bandwidth, end users can improve overall performance and reduce load on servers, but end users have their own motivations and incentives that are not necessarily aligned with those of content distributors. Consequently, existing content distributors either opt to serve content exclusively from hosts under their direct control, and thus neglect the large pool of resources that end users can offer, or they allow end users to contribute bandwidth at the expense of sacrificing complete control over available resources. This thesis introduces a new approach to content distribution that achieves high performance for distributing bulk content, based on managed swarms. Managed swarms efficiently allocate bandwidth from origin servers, in-network caches, and end users to achieve system-wide performance objectives. Managed swarming systems are characterized by the presence of a logically centralized coordinator that maintains a global view of the system and directs hosts toward an efficient use of bandwidth. The coordinator allocates bandwidth from each host based on empirical measurements of swarm behavior combined with a new model of swarm dynamics. The new model enables the coordinator to predict how swarms will respond to changes in bandwidth based on past measurements of their performance. In this thesis, we focus on the global objective of maximizing download bandwidth across end users in the system. To that end, we introduce two algorithms that the coordinator can use to compute efficient allocations of bandwidth for each host that result in high download speeds for clients. We have implemented a scalable coordinator that uses these algorithms to maximize system-wide aggregate bandwidth. The coordinator actively measures swarm dynamics and uses the data to calculate, for each host, a bandwidth allocation among the swarms competing for the host's bandwidth. Extensive simulations and a live deployment show that managed swarms significantly outperform centralized distribution services as well as completely decentralized peer-to-peer systems.
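
    A greedy sketch of the coordinator's allocation step, under our own simplifications: the host's upload capacity is handed out in equal chunks, each going to the swarm whose modeled marginal download gain is largest. Here predicted_gain stands in for the thesis's empirical swarm-dynamics model, and the chunked greedy loop is our illustration, not one of the thesis's two algorithms:

        def allocate_host_bandwidth(host_up, swarms, predicted_gain, steps=10):
            # Split one host's upload capacity across the swarms competing for
            # it, chunk by chunk, always feeding the swarm whose predicted
            # marginal download gain is currently the largest.
            alloc = {s: 0.0 for s in swarms}
            chunk = host_up / steps
            for _ in range(steps):
                best = max(swarms,
                           key=lambda s: predicted_gain(s, alloc[s] + chunk)
                                         - predicted_gain(s, alloc[s]))
                alloc[best] += chunk
            return alloc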

    Distributed, Secure Load Balancing with Skew, Heterogeneity, and Churn

    Numerous proposals exist for load balancing in peer-to-peer (p2p) networks. Some focus on namespace balancing, making the distance between nodes as uniform as possible. This technique works well under ideal conditions, but not under those found empirically. Instead, researchers have found heavy-tailed query distributions (skew), high rates of node join and leave (churn), and wide variation in node network and storage capacity (heterogeneity). Other approaches tackle these less-than-ideal conditions, but give up on important security properties. We propose an algorithm that both facilitates good performance and does not dilute security. Our algorithm, k-Choices, achieves load balance by greedily matching nodes' target workloads with actual applied workloads through limited sampling, and limits any fundamental decrease in security by basing each node's set of potential identifiers on a single certificate. Our algorithm compares favorably to four others in trace-driven simulations. We have implemented our algorithm and found that it improved aggregate throughput by 20% in a widely heterogeneous system in our experiments.
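
    The core of k-Choices can be sketched in a few lines: a node derives a small, verifiable set of candidate identifiers from its certificate, samples the workload it would absorb at each, and joins at the position closest to its target. The hash choice, probing interface, and distance metric below are our assumptions:

        import hashlib

        def candidate_ids(certificate, k):
            # Derive k verifiable identifiers from a single certificate, so a
            # node cannot pick arbitrary points in the namespace.
            return [int.from_bytes(hashlib.sha1(certificate + bytes([i])).digest(), "big")
                    for i in range(k)]

        def choose_id(certificate, k, sample_workload, target):
            # Probe the workload the node would absorb at each candidate
            # position and join at the one closest to its capacity target.
            ids = candidate_ids(certificate, k)
            return min(ids, key=lambda nid: abs(sample_workload(nid) - target))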