
    End-to-end QoE optimization through overlay network deployment

    In this paper, an overlay network for end-to-end QoE management is presented. The goal of this infrastructure is QoE optimization by routing around failures in the IP network and optimizing bandwidth usage on the last mile to the client. The overlay network consists of components located both in the core and at the edge of the network. A number of overlay servers perform end-to-end QoS monitoring and maintain an overlay topology, allowing them to route around link failures and congestion. Overlay access components situated at the edge of the network are responsible for determining whether packets are sent to the overlay network, while proxy components manage the bandwidth on the last mile. This paper gives a detailed overview of the end-to-end architecture, together with representative experimental results that demonstrate the overlay network's ability to optimize the QoE.
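
    To make the rerouting decision concrete, here is a minimal sketch assuming a hypothetical loss-rate monitor `measure(a, b)` and a one-hop detour model; the names, the threshold, and the single-metric cost are illustrative simplifications, not the paper's actual components.

```python
# Minimal sketch of the overlay rerouting decision described above. The
# function names, the loss threshold, and the single loss-rate metric are
# illustrative assumptions, not the paper's components.

LOSS_THRESHOLD = 0.02  # hypothetical trigger for detouring via the overlay

def pick_path(src, dst, overlay_servers, measure):
    """Return the direct path, or a one-hop overlay detour if it is better.

    measure(a, b) is assumed to return the monitored loss rate on a -> b.
    """
    if measure(src, dst) <= LOSS_THRESHOLD:
        return [src, dst]  # direct IP path is healthy, stay off the overlay
    # Route around the failure or congestion via the least-lossy server.
    best = min(overlay_servers, key=lambda s: measure(src, s) + measure(s, dst))
    return [src, best, dst]
```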

    EGOIST: Overlay Routing Using Selfish Neighbor Selection

    A foundational issue underlying many overlay network applications, ranging from routing to P2P file sharing, is that of connectivity management, i.e., folding new arrivals into an existing overlay and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist – a prototype overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using measurements on PlanetLab and trace-based simulations, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we discuss some of the potential benefits Egoist may offer to applications. Funding: National Science Foundation (CISE/CSR 0720604, ENG/EFRI 0735974, CISE/CNS 0524477, CNS/NeTS 0520166, CNS/ITR 0205294; CISE/EIA RI 0202067; CAREER 04446522); European Commission (RIDS-011923).
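
    The neighbor-selection idea lends itself to a short sketch. Below is a greedy best-response step under shortest-path-style costs; the function names and the greedy simplification are ours, assuming measured link costs and the current routing costs of other nodes (the system's actual primitives may differ).

```python
# Hedged sketch of a selfish (best-response) neighbor-selection step in the
# spirit of EGOIST; this greedy variant is a simplification, not the
# system's exact primitive.

def best_response(i, nodes, k, link_cost, route_cost):
    """Greedily choose k neighbors for node i, given everyone else's routes.

    link_cost(i, j): measured cost of the direct overlay link i -> j.
    route_cost(j, t): j's current overlay routing cost to destination t.
    """
    def total_cost(neighbors):
        # Cost to reach each destination via the cheapest chosen neighbor.
        return sum(
            min(link_cost(i, j) + route_cost(j, t) for j in neighbors)
            for t in nodes
            if t != i
        )

    chosen = set()
    for _ in range(k):
        remaining = [j for j in nodes if j != i and j not in chosen]
        best = min(remaining, key=lambda j: total_cost(chosen | {j}))
        chosen.add(best)
    return chosen
```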

    Large scale probabilistic available bandwidth estimation

    The common utilization-based definition of available bandwidth, and many of the existing tools to estimate it, suffer from several important weaknesses: i) most tools report a point estimate of average available bandwidth over a measurement interval and do not provide a confidence interval; ii) the commonly adopted models used to relate the available bandwidth metric to the measured data are invalid in almost all practical scenarios; iii) existing tools do not scale well and are not suited to the task of multi-path estimation in large-scale networks; iv) almost all tools use ad-hoc techniques to address measurement noise; and v) tools do not provide enough flexibility in terms of accuracy, overhead, latency and reliability to adapt to the requirements of various applications. In this paper we propose a new definition for available bandwidth and a novel framework that addresses these issues. We define probabilistic available bandwidth (PAB) as the largest input rate at which we can send a traffic flow along a path while achieving, with specified probability, an output rate that is almost as large as the input rate. PAB is expressed directly in terms of the measurable output rate and includes adjustable parameters that allow the user to adapt to different application requirements. Our probabilistic framework to estimate network-wide probabilistic available bandwidth is based on packet trains, Bayesian inference, factor graphs and active sampling. We deploy our tool on the PlanetLab network and our results show that we can obtain accurate estimates with a much smaller measurement overhead compared to existing approaches. Comment: Submitted to Computer Networks.
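
    The verbal definition above can be written compactly; the symbols below are one possible notation, not necessarily the paper's:

```latex
% One possible formalization of the quoted PAB definition (our symbols):
% for a given path, tolerance \epsilon and target probability p,
\[
  \mathrm{PAB}_{\epsilon,p}
  \;=\;
  \sup\bigl\{\, r \;:\;
      \Pr\bigl[\, r_{\mathrm{out}}(r) \ge (1-\epsilon)\, r \,\bigr] \ge p
  \,\bigr\}
\]
% where r is the input rate of the probe flow and r_out(r) the resulting
% measurable output rate; \epsilon and p are the adjustable parameters
% mentioned in the abstract.
```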

    Cross-Layer Peer-to-Peer Traffic Identification and Optimization Based on Active Networking

    P2P applications appear to be emerging as the ultimate killer applications due to their ability to construct highly dynamic overlay topologies with rapidly-varying and unpredictable traffic dynamics, which can constitute a serious challenge even for significantly over-provisioned IP networks. As a result, ISPs are facing new, severe network management problems that are not guaranteed to be addressed by statically deployed network engineering mechanisms. As a first step towards a more complete solution to these problems, this paper proposes a P2P measurement, identification and optimisation architecture designed to cope with the dynamicity and unpredictability of existing, well-known and future, unknown P2P systems. The purpose of this architecture is to provide ISPs with an effective and scalable approach to control and optimise the traffic produced by P2P applications in their networks. This can be achieved through a combination of different application-level and network-level programmable techniques, leading to a cross-layer identification and optimisation process. These techniques can be applied using Active Networking platforms, which are able to quickly and easily deploy architectural components on demand. This flexibility of the optimisation architecture is essential to address the rapid development of new P2P protocols and the variation of known protocols.
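
    As a concrete illustration of cross-layer identification (a toy example, not the architecture's actual classifier), a flow might be flagged as P2P only when a network-level heuristic and an application-level payload signature agree:

```python
# Toy cross-layer P2P check: the thresholds, attribute names and the idea of
# combining exactly these two signals are illustrative assumptions, not the
# paper's identification architecture.

# Well-known application-level handshake prefixes (BitTorrent, Gnutella).
P2P_SIGNATURES = (b"\x13BitTorrent protocol", b"GNUTELLA CONNECT")

def looks_like_p2p(flow):
    """flow is assumed to expose .peer_count, .dst_port and .payload_prefix."""
    # Network-level heuristic: many distinct peers on unprivileged ports.
    network_level = flow.peer_count > 50 and flow.dst_port > 1024
    # Application-level signature match on the first payload bytes.
    app_level = flow.payload_prefix.startswith(P2P_SIGNATURES)
    return network_level and app_level
```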

    Overlay networks monitoring

    The phenomenal growth of the Internet and its entry into many aspects of daily life has led to a great dependency on its services. Multimedia and content distribution applications (e.g., video streaming, online gaming, VoIP) require Quality of Service (QoS) guarantees in terms of bandwidth, delay, loss, and jitter to maintain a certain level of performance. Moreover, E-commerce applications and retail websites are faced with increasing demand for better throughput and response time performance. The most practical way to realize such applications is through the use of overlay networks, which are logical networks that implement service and resource management functionalities at the application layer. Overlays offer better deployability, scalability, security, and resiliency properties than network-layer-based implementations of services. Network monitoring and routing are among the most important issues in the design and operation of overlay networks. Accurate monitoring of QoS parameters is a challenging problem due to: (i) unbounded link stress in the underlying IP network, and (ii) the conflict in measurements caused by spatial and temporal overlap among measurement tasks. In this context, the focus of this dissertation is on the design and evaluation of efficient QoS monitoring and fault location algorithms using overlay networks. First, the issue of monitoring accuracy provided by multiple concurrent active measurements is studied on a large-scale overlay test-bed (PlanetLab), the factors affecting the accuracy are identified, and the measurement conflict problem is introduced. Then, the problem of conducting conflict-free measurements is formulated as a scheduling problem of real-time tasks, its complexity is proven to be NP-hard, and efficient heuristic algorithms for the problem are proposed. Second, an algorithm for minimizing monitoring overhead while controlling the IP link stress is proposed. Finally, the use of overlay monitoring to locate IP links' faults is investigated. Specifically, the problem of designing an overlay network for verifying the location of IP links' faults, under cost and link stress constraints, is formulated as an integer generalized flow problem, and its complexity is proven to be NP-hard. An optimal polynomial-time algorithm for the relaxed problem (relaxed link stress constraints) is proposed. A combination of simulation and experimental studies using real-life measurement tools and Internet topologies of major ISP networks is conducted to evaluate the proposed algorithms. The studies show that the proposed algorithms significantly improve the accuracy and reduce the link stress of overlay monitoring, while incurring low overheads. The evaluation of fault location algorithms shows that fast and highly accurate verification of faults can be achieved using overlay monitoring. In conclusion, the holistic view taken and the solutions developed for network monitoring provide a comprehensive framework for the design, operation, and evolution of overlay networks.
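
    To illustrate the conflict-free scheduling idea, here is a minimal sketch assuming two tasks conflict when their measurement paths share an IP link; this greedy colouring is a generic heuristic for such scheduling problems, not the dissertation's proposed algorithm.

```python
# Minimal greedy sketch of conflict-free measurement scheduling: tasks whose
# overlay paths share an IP link must not run in the same time slot. This
# greedy colouring is a generic heuristic, not the dissertation's algorithm.

def schedule(tasks, shares_link):
    """Assign each measurement task the earliest slot free of conflicts.

    tasks: iterable of task ids; shares_link(a, b) -> True if the two tasks'
    paths overlap on some IP link (and thus would bias each other).
    """
    slots = {}
    for t in tasks:
        taken = {slots[u] for u in slots if shares_link(t, u)}
        slot = 0
        while slot in taken:
            slot += 1  # first slot with no conflicting task
        slots[t] = slot
    return slots
```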

    Implementation and Deployment of a Distributed Network Topology Discovery Algorithm

    In the past few years, the network measurement community has been interested in the problem of internet topology discovery using a large number (hundreds or thousands) of measurement monitors. The standard way to obtain information about the internet topology is to use the traceroute tool from a small number of monitors. Recent papers have made the case that increasing the number of monitors will give a more accurate view of the topology. However, scaling up the number of monitors is not a trivial process. Duplication of effort close to the monitors wastes time by re-exploring well-known parts of the network, while duplication close to destinations might appear to be a distributed denial-of-service (DDoS) attack as the probes converge from a set of sources towards a given destination. In prior work, the authors of this report proposed Doubletree, an algorithm for cooperative topology discovery that reduces the load on the network, i.e., on router IP interfaces and end-hosts, while discovering almost as many nodes and links as standard approaches based on traceroute. This report presents our open-source and freely downloadable implementation of Doubletree in a tool we call traceroute@home. We describe the deployment and validation of traceroute@home on the PlanetLab testbed and report on the lessons learned from this experience. We discuss how traceroute@home can be developed further and present ideas for future improvements.
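
    A condensed sketch of Doubletree-style stopping rules follows; the probe helper, the data structures, and the starting hop h are assumptions for illustration, so consult the report and the traceroute@home code for the real algorithm.

```python
# Condensed sketch of Doubletree-style probing. probe(dest, ttl) is an
# assumed helper returning the interface that answers at that hop (or None);
# the data-structure choices here are illustrative, not traceroute@home's.

def doubletree(dest, h, probe, local_stop, global_stop):
    # Forward phase: probe from hop h toward the destination, stopping when
    # another monitor has already seen this (interface, destination) pair.
    ttl = h
    while True:
        iface = probe(dest, ttl)
        if iface is None or iface == dest or (iface, dest) in global_stop:
            break
        global_stop.add((iface, dest))
        ttl += 1
    # Backward phase: probe from hop h - 1 back toward this monitor,
    # stopping at the first interface this monitor has already discovered.
    for ttl in range(h - 1, 0, -1):
        iface = probe(dest, ttl)
        if iface in local_stop:
            break
        if iface is not None:
            local_stop.add(iface)
```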