
    DynCoDe: Dynamic Content Delivery for Internet Services

    Web services on the Internet increasingly rely on personalized content that is dynamically generated by server application code and customized for individual users. Delivering such personalized content increases the computational load on servers and does not fit the current Internet web caching model, leading to increased user latency and bandwidth consumption. In this paper, we propose DynCoDe, a novel architecture for efficient delivery of personalized web services that integrates the distribution, caching and generation of personalized content. In the DynCoDe architecture, resource-intensive content generation processes and reusable content components are pushed to the network edge, increasing server scalability and content availability while reducing user latency and backbone Internet traffic. We evaluate DynCoDe under real-world network conditions and show significant improvements in bandwidth consumption, user latency and server scalability.
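
    The abstract's central idea, generating and caching reusable pieces of personalized pages at the network edge, can be pictured with a small sketch. The code below is an illustrative reading of that idea rather than DynCoDe's actual design or API: an edge node caches shared fragments, fetches missing ones from the origin once, and fills user-specific slots locally. The EdgeNode class, render_page method and origin_fetch callback are all hypothetical names.

```python
# Illustrative sketch (not DynCoDe's API): an edge node caches reusable
# content fragments, fetches missing ones from the origin server once,
# and assembles the personalized page locally at the network edge.

class EdgeNode:
    def __init__(self, origin_fetch):
        self.fragment_cache = {}          # fragment_id -> cached content
        self.origin_fetch = origin_fetch  # callback into the origin server

    def get_fragment(self, fragment_id):
        # Reusable components are fetched from the origin only on a miss.
        if fragment_id not in self.fragment_cache:
            self.fragment_cache[fragment_id] = self.origin_fetch(fragment_id)
        return self.fragment_cache[fragment_id]

    def render_page(self, template, user_profile):
        # Content generation runs at the edge: shared fragments come from
        # the cache, user-specific slots are filled from the profile.
        parts = []
        for slot in template:
            if slot.startswith("frag:"):
                parts.append(self.get_fragment(slot[len("frag:"):]))
            else:
                parts.append(user_profile.get(slot, ""))
        return "".join(parts)


if __name__ == "__main__":
    origin = {"header": "<h1>News</h1>", "footer": "<hr>contact us"}
    edge = EdgeNode(origin_fetch=lambda fid: origin[fid])
    print(edge.render_page(
        template=["frag:header", "greeting", "frag:footer"],
        user_profile={"greeting": "Hello, Alice!"},
    ))
```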

    Harnessing heterogeneous resources for improving Internet data transfer

    The proliferation of wide-area data-intensive applications such as peer-to-peer file sharing and large web downloads has escalated the importance of high-throughput, reliable data transfers. This dissertation develops techniques to efficiently use available resources such as infrastructure overlay nodes, network peers and the local disk in order to systematically improve Internet data transfer performance. We first focus on improving performance between a single sender-receiver pair by addressing the end-to-end feedback limitation in TCP, a popular transport protocol. Our solution, Slot, shortens the end-to-end feedback loop using resources from an infrastructure overlay network, thereby improving overall performance. This dissertation addresses design challenges in Slot for discovering an efficient overlay path in a scalable and practical fashion and for supporting multiple clients using a common overlay. At the core of Slot is a measurement infrastructure that monitors network properties for different overlay paths. The design of this infrastructure raises interesting questions regarding the dynamics of network properties and their time scales, which this dissertation answers using network delay as an example. An understanding of the dynamics of network properties and an overlay-based data transfer that exploits such knowledge together make Slot an effective way to improve data transfer between a sender-receiver pair. Though Slot improves the performance between a single sender-receiver pair, a single sender is sometimes unable to saturate the download bandwidth of a receiver, possibly because of bandwidth asymmetry in the access links of end hosts or network congestion in the core of the Internet. Multi-source transfers (e.g., BitTorrent) attempt to address this limited single-source problem. Our observations, however, indicate that bulk transfers are still slow despite current multi-source systems. In the second part of this dissertation, we investigate techniques that exploit additional resources to improve multi-source transfer performance. We present the design and implementation of SET, a system that locates available identical and similar sources for data objects using a constant number of lookups while inserting only a constant number of mappings per object into a global database. We also consider the use of the disk as an additional resource that can provide content required to complete a data transfer. Finally, we present the design, implementation and evaluation of dsync, a file transfer system that can dynamically adapt to a wide variety of environments while using all available resources to improve transfer performance. While many transfer systems work well in their specialized context, their performance comes at the cost of generality, and they perform poorly when used elsewhere. In contrast, dsync adapts to its environment by intelligently determining which of its available resources is the best to use at any given time. Our experience shows that Slot can improve the throughput of a single sender-receiver connection by 60-100%. dsync can combine the benefits of using identical and similar sources over the network with local resources from the disk to outperform existing systems by a factor of 1.4 to 5 in one-to-one and one-to-many transfers.
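
    The SET component described above publishes and looks up only a constant number of entries per object so that both identical and similar sources can be found cheaply. The sketch below shows one plausible realization of that property, assuming a deterministic sample of chunk hashes as the published keys; the chunk size, sample size K and function names are assumptions for illustration, not the system's actual parameters or interfaces.

```python
import hashlib

# Illustrative sketch of constant-cost similarity lookup: split an object
# into chunks, hash each chunk, and publish only the K numerically smallest
# hashes. Similar objects share chunks and hence tend to share published
# keys, so K lookups suffice to find identical and similar sources.
# CHUNK_SIZE, K and these function names are assumptions, not SET's API.

CHUNK_SIZE = 16 * 1024
K = 4  # constant number of mappings inserted / lookups issued per object

def chunk_hashes(data: bytes):
    return [hashlib.sha1(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)]

def sample_keys(data: bytes):
    # Deterministic sample: the K smallest chunk hashes.
    return sorted(chunk_hashes(data))[:K]

def publish(global_table: dict, source_id: str, data: bytes):
    # A source inserts a constant number of mappings for the object it holds.
    for key in sample_keys(data):
        global_table.setdefault(key, set()).add(source_id)

def find_sources(global_table: dict, data: bytes):
    # A receiver issues a constant number of lookups, independent of size.
    candidates = set()
    for key in sample_keys(data):
        candidates |= global_table.get(key, set())
    return candidates
```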

    Slot: Shortened Loop Internet Transport using Overlay Networks

    Overlay routing has emerged as a promising approach to mitigating many problems with Internet routing, such as improving the reliability of Internet paths and supporting multicast communication. As overlay routing gains wider acceptance, we argue that it is time to investigate how overlay networks can benefit Internet transport. This paper presents Slot, a framework that leverages overlay networks to improve the throughput of feedback-based transport protocols. Slot exploits the observation that the throughput of feedback-based transport protocols (e.g., TCP, XCP, VCP, DCCP) is inversely proportional to the length of their end-to-end feedback control loop, and effectively shortens an end-to-end control loop by breaking it up into multiple pipelined, shortened sub-loops via intermediaries carefully chosen from an overlay network. As a result, Slot increases the throughput of an end-to-end transport connection to that of the longest sub-loop. This paper studies the potential of Slot and addresses key challenges in its design and deployment. The contributions of this paper are three-fold. First, we make the case for Slot by measuring and analyzing the control loop lengths of close to 3.7 million node pairs and their potential benefit from Slot, using PlanetLab as an example overlay network. Second, we identify key challenges in the design of Slot and show that a simple, low-overhead solution can be used to select an overlay path that achieves close to the maximum possible throughput improvement. Third, we implement a prototype of Slot and deploy it on PlanetLab to fetch a large set of files crawled from popular web servers. Our results show that, compared to directly fetching the same documents, Slot improves the throughput of 95% of the large file transfers, and 50% of these transfers achieve more than a 30% increase in throughput.
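
    A quick back-of-the-envelope model conveys why shortening the feedback loop helps. Under the abstract's stated assumptions, throughput inversely proportional to control-loop length and an end-to-end rate limited by the longest sub-loop, a few lines suffice to compare the direct path with an overlay path. The window size and RTT figures below are illustrative numbers, not measurements from the paper.

```python
# Illustrative model only: assume throughput of a feedback-based connection
# scales as window / RTT, and that pipelined sub-loops let the connection
# run at the rate of its slowest (longest-RTT) segment.

def loop_throughput(rtt_ms, window_kb=64.0):
    # Window-limited throughput of a single feedback loop, in KB/s.
    return window_kb / (rtt_ms / 1000.0)

def slot_throughput(segment_rtts_ms, window_kb=64.0):
    # With pipelined sub-loops, the end-to-end rate is set by the longest one.
    return loop_throughput(max(segment_rtts_ms), window_kb)

if __name__ == "__main__":
    direct_rtt = 200.0                # ms, sender -> receiver directly
    via_overlay = [70.0, 80.0, 60.0]  # ms, sender -> A -> B -> receiver
    print("direct path  : %.0f KB/s" % loop_throughput(direct_rtt))
    print("via sub-loops: %.0f KB/s" % slot_throughput(via_overlay))
```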

    Mitigating the Gateway Bottleneck via Transparent Cooperative Caching in Wireless Mesh Networks

    Wireless mesh networks (WMNs) have been proposed to provide cheap, easily deployable and robust Internet access. The dominant Internet-bound traffic from clients causes a congestion bottleneck around the gateway, which can significantly limit the throughput of WMN clients in accessing the Internet. In this paper, we present MeshCache, a transparent caching system for WMNs that exploits the locality in client Internet-bound traffic to mitigate the bottleneck effect at the gateway, thereby improving client-perceived performance. MeshCache leverages the fact that a WMN typically spans a small geographic area, so that mesh routers can easily be over-provisioned with CPU, memory and disk storage, and extends the individual wireless mesh routers in a WMN with built-in content caching functionality. It then performs cooperative caching among the wireless mesh routers. We explore two architecture designs for MeshCache: (1) caching at every client access mesh router upon file download, and (2) caching at each mesh router along the route the Internet traffic travels, which requires breaking a single end-to-end transport connection into multiple single-hop transport connections along the route. We also leverage the abundant research results on cooperative web caching in the Internet in designing cache selection protocols for efficiently locating caches containing data objects for these two architectures. We further compare these two MeshCache designs with caching at the gateway router only. Through extensive simulations and evaluations using a prototype implementation on a testbed, we find that MeshCache can significantly improve the performance of client nodes in WMNs. In particular, our experiments with a Squid-based MeshCache implementation deployed on the MAP mesh network testbed with 15 routers show that, compared to caching at the gateway only, the MeshCache architecture with hop-by-hop caching reduces the load at the gateway by 38%, improves the average client throughput by 170%, and increases the number of transfers that achieve a throughput greater than 1 Mbps by a factor of 3.
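
    The hop-by-hop variant of MeshCache can be pictured as each mesh router on the route to the gateway checking its local cache before forwarding a request, with the response then cached at every router it traverses on the way back. The sketch below is an illustrative rendering of that flow, not MeshCache's implementation; the MeshRouter class, fetch function and origin_fetch callback are hypothetical names.

```python
# Illustrative sketch (not MeshCache's implementation): a request walks the
# route from the client's access router toward the gateway, stopping at the
# first router holding a cached copy; the response is then cached at every
# router between the hit point and the client.

class MeshRouter:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # url -> content

def fetch(url, route, origin_fetch):
    """route: routers ordered from the client's access router to the gateway."""
    hit_index = None
    for i, router in enumerate(route):
        if url in router.cache:
            hit_index = i
            break
    if hit_index is None:
        # Miss everywhere: the gateway fetches the object from the Internet.
        content = origin_fetch(url)
        hit_index = len(route)
    else:
        content = route[hit_index].cache[url]
    # Hop-by-hop caching on the way back toward the client.
    for router in route[:hit_index]:
        router.cache[url] = content
    return content

if __name__ == "__main__":
    route = [MeshRouter("r1"), MeshRouter("r2"), MeshRouter("gateway")]
    origin = lambda u: "<html>%s</html>" % u
    fetch("http://example.com/a", route, origin)         # miss: served via the gateway
    print(fetch("http://example.com/a", route, origin))  # hit at r1, gateway bypassed
```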

    Distributed Hashing for Scalable Multicast in Wireless Ad Hoc Networks

    Several multicast protocols for mobile ad hoc networks (MANETs) have been proposed that build multicast trees using location information available from GPS or localization algorithms and use geographic forwarding to forward packets down the multicast trees. These stateless multicast protocols carry encoded membership, location and tree information in each packet. Stateless protocols are more efficient and robust than stateful protocols (e.g., ADMR, ODMRP) as they avoid the difficulty of maintaining distributed state in the presence of frequent topology changes in MANETs. However, stateless location-based multicast protocols are not scalable to large groups because they encode group membership in the header of each data packet, i.e., they incur a per-packet encoding overhead. Additionally, such protocols involve centralized group membership and location management, either at the tree root or at the traffic source. In this work, we present the Hierarchical Rendezvous Point Multicast (HRPM) protocol, which significantly improves the scalability of stateless location-based multicast with respect to the group size. HRPM incorporates two key design ideas: (1) hierarchical decomposition of multicast groups, and (2) use of distributed geographic hashing to construct and maintain such a hierarchy efficiently. HRPM organizes a large group into a hierarchy of recursively organized, manageable-sized subgroups in order to reduce the per-packet encoding overhead. More importantly, HRPM constructs and maintains this hierarchy at virtually no cost using distributed hashing: distributed hashing is recursively applied at each subgroup for group management, avoiding the potentially high cost of maintaining distributed state at mobile nodes. The hierarchical organization and the distributed hashing property also allow HRPM to scale to large networks and large numbers of groups. Performance results obtained via detailed simulations demonstrate that HRPM achieves enhanced scalability and performance. Coupled with its use of stateless geographic forwarding, HRPM scales well in terms of the group size, the number of groups, the number of sources, and the size of the network. In particular, HRPM maintains a multicast delivery ratio close to 95% while incurring an average per-packet tree-encoding overhead of 5.5% for up to 250 group members in a 500-node network. Furthermore, it achieves a steady 95% delivery ratio while incurring nearly constant overhead as the number of groups increases from 2 to 45 with the total number of receivers kept constant at 180 in a 500-node network. Lastly, it steadily achieves above a 90% delivery ratio as the network scales up to 1000 nodes with up to 30% of the nodes as group members. As a reference, we also compare HRPM to ODMRP, a state-of-the-art topology-based multicast protocol that is scalable to large groups. HRPM performs comparably to ODMRP across a wide range of group sizes. Moreover, HRPM outperforms ODMRP when the network size, the number of groups, or the number of sources increases.
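
    The distributed geographic hashing at the heart of HRPM maps a group identifier to a point in the deployment area, and the node closest to that point takes on the rendezvous role; recursing within each cell of the decomposition yields the hierarchy. The sketch below illustrates that mapping under assumed parameters (a square region, a fixed CELLS x CELLS decomposition, SHA-1 as the hash); none of the names or constants come from the paper.

```python
import hashlib

# Illustrative sketch of distributed geographic hashing for rendezvous-based
# group management: a group ID hashes to a point in the deployment area and
# the node geographically closest to that point acts as the rendezvous point;
# hashing again inside each cell of the decomposition yields the hierarchy.
# The region size, cell count and SHA-1 choice are assumptions, not HRPM's.

REGION = 1000.0   # side of the square deployment area, in metres
CELLS = 4         # decomposition into CELLS x CELLS subcells

def _hash_to_unit_square(key: str):
    digest = hashlib.sha1(key.encode()).digest()
    x = int.from_bytes(digest[:4], "big") / 2**32
    y = int.from_bytes(digest[4:8], "big") / 2**32
    return x, y

def group_rendezvous_point(group_id: str):
    # Top-level rendezvous point for the whole group.
    x, y = _hash_to_unit_square(group_id)
    return x * REGION, y * REGION

def cell_rendezvous_point(group_id: str, cell):
    # Per-cell rendezvous point for the subgroup living in cell (cx, cy).
    cx, cy = cell
    side = REGION / CELLS
    x, y = _hash_to_unit_square("%s/%d/%d" % (group_id, cx, cy))
    return cx * side + x * side, cy * side + y * side

def closest_node(point, node_positions):
    # Any node can compute the hash, so the node nearest the hashed point
    # takes on the rendezvous role without explicit coordination.
    px, py = point
    return min(node_positions,
               key=lambda n: (node_positions[n][0] - px) ** 2 +
                             (node_positions[n][1] - py) ** 2)

if __name__ == "__main__":
    nodes = {"n1": (120.0, 900.0), "n2": (480.0, 510.0), "n3": (860.0, 75.0)}
    rp = group_rendezvous_point("group-7")
    print("rendezvous point", rp, "handled by", closest_node(rp, nodes))
```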

    DynCoDe: An Architecture for Transparent Dynamic Content Delivery

    Delivery of web content increasingly relies on dynamic and personalized content. Caching has been studied extensively as a means of reducing client latency and bandwidth requirements for static content, and there has been recent interest in schemes that exploit locality in dynamic web content [1, 2]. We propose a novel scheme that integrates the distribution and caching of personalized content, which relies heavily on dynamic generation. In the proposed architecture, the resource-intensive processes involved in content generation are pushed to the network edges. We have performed a preliminary evaluation of the architecture under real-world network conditions and observed significant improvements in bandwidth consumption, user response time and server scalability, demonstrating the feasibility of such a scheme.

    Overlay Node Placement: Analysis, Algorithms and Impact on Applications

    Overlay routing has emerged as a promising approach to improving the performance and reliability of Internet paths. To fully realize the potential of overlay routing under deployment cost constraints in terms of hardware, network connectivity and human effort, it is critical to place infrastructure overlay nodes carefully, balancing the trade-off between performance and resource constraints. In this paper, we investigate approaches to intelligently place overlay nodes to facilitate (i) resilient routing and (ii) TCP performance improvement. We formulate objective functions that accurately capture application behavior, namely reliability and TCP performance, and develop several placement algorithms that offer a wide range of trade-offs in complexity and in the required knowledge of client-server locations and traffic load. Using simulations on synthetic and real Internet topologies, as well as PlanetLab experiments, we demonstrate the effectiveness of the placement algorithms and objective functions we develop. We conclude that a hybrid of the random and greedy approaches provides the best trade-off between computational efficiency and accuracy. We also uncover a fundamental challenge in simultaneously optimizing for reliability and TCP performance, and propose a simple unified algorithm that addresses both objectives.
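
    One way to read the hybrid of the random and greedy approaches is that each placement step evaluates the objective function only over a small random sample of the remaining candidates and greedily keeps the best one, trading a little accuracy for much lower computational cost. The sketch below is a plausible form of such a hybrid, not the paper's algorithm; the function name, sample size and toy objective are all illustrative.

```python
import random

# Illustrative sketch of a random/greedy hybrid (not the paper's algorithm):
# each step greedily keeps the best node from a small random sample of the
# remaining candidates, instead of scoring every candidate (pure greedy) or
# picking blindly (pure random). The objective function is caller-supplied.

def hybrid_placement(candidates, k, objective, sample_size=10, seed=0):
    """Choose k overlay node locations from `candidates`.

    objective(placed) -> float, larger is better (for example, the fraction
    of client-server pairs whose best overlay path beats the direct path).
    """
    rng = random.Random(seed)
    placed = []
    remaining = list(candidates)
    for _ in range(k):
        sample = rng.sample(remaining, min(sample_size, len(remaining)))
        best = max(sample, key=lambda c: objective(placed + [c]))
        placed.append(best)
        remaining.remove(best)
    return placed

if __name__ == "__main__":
    # Toy objective: reward placements whose closest pair of nodes is far apart.
    def spread(placed):
        if len(placed) < 2:
            return 0
        return min(abs(a - b) for a in placed for b in placed if a != b)

    print(hybrid_placement(candidates=range(100), k=5, objective=spread))
```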