    Buffer Sizing for 802.11 Based Networks

    We consider the sizing of network buffers in 802.11 based networks. Wireless networks face a number of fundamental issues that do not arise in wired networks. We demonstrate that the use of fixed-size buffers in 802.11 networks inevitably leads to either undesirable channel under-utilization or unnecessarily high delays. We present two novel dynamic buffer sizing algorithms that achieve high throughput while maintaining low delay across a wide range of network conditions. Experimental measurements demonstrate the utility of the proposed algorithms in a production WLAN and a lab testbed. Comment: 14 pages, to appear in IEEE/ACM Transactions on Networking.
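
    As an informal illustration of the dynamic buffer sizing idea (not the authors' exact algorithms), the Python sketch below sizes an 802.11 interface buffer from a measured mean per-packet service time and a target queueing delay; the function name, the 20 ms delay target, and the clamping bounds are assumptions made for the example.

```python
# Hypothetical sketch of delay-based dynamic buffer sizing for an 802.11 link.
# Not the paper's algorithm: names, delay target, and bounds are assumed.

def adapt_buffer_limit(mean_service_time_s, target_delay_s=0.020,
                       min_pkts=2, max_pkts=400):
    """Return a buffer limit (in packets) such that a full buffer drains
    in roughly target_delay_s at the currently observed service rate."""
    if mean_service_time_s <= 0:
        return max_pkts
    limit = int(target_delay_s / mean_service_time_s)
    return max(min_pkts, min(limit, max_pkts))

# Example: if a packet takes 1 ms on average to transmit (including MAC
# contention), a 20 ms delay target allows roughly a 20-packet buffer.
print(adapt_buffer_limit(0.001))   # -> 20
```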

    Link Buffer Sizing: a New Look at the Old Problem

    In this paper, we revisit the question of how much buffer an IP router should allocate for its output link. For a long time, the intuitive answer of setting the buffer size to the bitrate-delay product has been widely regarded as reasonable. Recent studies of the interaction between queueing at IP routers and TCP congestion control have proposed alternative answers. First, we expose and explain contradictions between existing guidelines for link buffer sizing. Then, we argue that the problem of link buffer sizing needs a different formulation. In particular, the chosen buffer size should accommodate not only common versions of TCP but also UDP traffic. In addition, our new formulation of the problem contains an explicit constraint of not engaging IP routers in any additional signaling. We conclude the paper by outlining a promising direction for solving the reformulated problem.
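
    For concreteness, the bitrate-delay product rule mentioned above sets the buffer to the link rate multiplied by a representative round-trip time; the worked computation below uses illustrative values (1 Gb/s, 100 ms, 1500-byte packets) that are not taken from the paper.

```python
# Illustrative bitrate-delay product (BDP) buffer computation.
# The 1 Gb/s link rate and 100 ms RTT are example values, not the paper's.

link_rate_bps = 1_000_000_000      # 1 Gb/s output link
rtt_s = 0.100                      # representative round-trip time
packet_size_bits = 1500 * 8        # assume 1500-byte packets

bdp_bits = link_rate_bps * rtt_s
buffer_packets = bdp_bits / packet_size_bits

print(f"BDP buffer: {bdp_bits / 8 / 1e6:.1f} MB "
      f"(~{buffer_packets:.0f} packets of 1500 bytes)")
# -> BDP buffer: 12.5 MB (~8333 packets of 1500 bytes)
```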

    Networking Mechanisms for Delay-Sensitive Applications

    The diversity of applications served by the explosively growing Internet is increasing. In particular, applications that are sensitive to end-to-end packet delays are becoming more common and include telephony, video conferencing, and networked games. While the single best-effort service of the current Internet favors throughput-greedy traffic by equipping congested links with large buffers, long queuing at the congested links hurts delay-sensitive applications. Furthermore, while numerous alternative architectures have been proposed to offer diverse network services, these innovative alternatives have failed to gain widespread end-to-end deployment. This dissertation explores different networking mechanisms for supporting the low queueing delay required by delay-sensitive applications. In particular, it considers two different approaches. The first assumes that the traffic generated by the considered class of applications employs appropriate congestion control protocols. The second approach relies on router operation only and does not require support from end hosts.

    EBDP BUFFER SIZING STRATEGY FOR 802.11 BASED WLANS

    For wired routers, the sizing of buffers is an active research topic. The classical rule of thumb for sizing wired buffers is to set buffer sizes to the product of the bandwidth and the average delay of the flows utilizing the link, namely the Bandwidth-Delay Product (BDP) rule. Surprisingly, however, the sizing of buffers in wireless networks (especially those based on 802.11/802.11e) appears to have received very little attention within the networking community. Exceptions include recent work relating to buffer sizing for voice traffic in 802.11e WLANs, work considering the impact of buffer sizing on TCP upload/download fairness, and work related to 802.11e parameter settings.
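
    To see why a single fixed buffer size is awkward in a WLAN, one can evaluate the BDP rule across the range of 802.11 transmit rates; the rate set, the 100 ms delay figure, and the packet size below are illustrative assumptions rather than values from the paper.

```python
# Illustrative: the BDP rule yields very different buffer sizes as the
# 802.11 transmit rate changes. Rates, delay, and packet size are assumed.

phy_rates_mbps = [1, 6, 11, 24, 54]   # a mix of 802.11b/g PHY rates
avg_delay_s = 0.100                   # assumed average flow delay
packet_bytes = 1500

for rate in phy_rates_mbps:
    bdp_packets = rate * 1e6 * avg_delay_s / (packet_bytes * 8)
    print(f"{rate:>2} Mb/s -> BDP buffer ~ {bdp_packets:.0f} packets")
# Output ranges from ~8 packets at 1 Mb/s to ~450 packets at 54 Mb/s,
# so no single fixed size suits all operating conditions.
```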

    Model based analysis of some high speed network issues

    The study of complex problems in science and engineering today typically involves large-scale data, and a growing number of large-scale scientific breakthroughs depends critically on large multi-disciplinary, geographically dispersed research teams, for which high-speed networks become an integral part. To serve the ongoing bandwidth requirements and scalability of these networks, there has been a continuous evolution of TCP variants for high-speed networks. Testing these protocols on a real network would be expensive, time consuming and, moreover, not easily available to researchers worldwide. Network simulation is a well-accepted and widely used method for performance evaluation, yet it is well known that packet-based simulators like NS2 and Opnet are not adequate for high-speed and large-scale networks because of their inherent bottlenecks in terms of message overhead and execution time. In such cases, a model-based approach built on a set of coupled differential equations is preferred for simulation. This dissertation is focused on the key challenges in the research and development of TCPs for high-speed networks. To address these challenges, this thesis has three objectives: design an analytical simulation methodology; model the behavior of high-speed networks and their components, including TCP flows and queues, using the analytical simulation method; and analyze them to explore the impacts and interrelationships among them. To decrease simulation time and speed up the process of testing and developing high-speed TCP, we present a scalable simulation methodology for high-speed networks. We present fluid model equations for various high-speed TCP variants. With the help of these fluid model equations, the behavior of high-speed TCP variants under various scenarios and their effect on queue size variations are presented. A high-speed network is not feasible unless we understand the effect of bottleneck buffer size on the performance of these high-speed TCP variants. A fluid model is introduced to accommodate the new observations of synchronization and de-synchronization phenomena of packet losses at the bottleneck link, and a microscopic analysis is presented for different buffer sizes under the drop-tail queueing scheme. The proposed model-based methods promote a principled understanding of future heterogeneous networks and accelerate protocol development.
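
    To give a flavour of the model-based approach, the sketch below numerically integrates a simplified TCP/queue fluid model (a Reno-like AIMD window equation coupled with a bottleneck queue, in the spirit of classic fluid models) using a fixed-step Euler scheme; all parameter values and the simplified, delay-free loss term are assumptions for illustration, not the dissertation's equations.

```python
# Minimal sketch: Euler integration of a simplified TCP fluid model coupled
# with a drop-tail-like bottleneck queue. Parameters and the crude loss
# signal are illustrative assumptions.

N = 50            # number of long-lived TCP flows
C = 12500.0       # link capacity in packets/s (~150 Mb/s with 1500 B packets)
R0 = 0.1          # two-way propagation delay, s
B = 500.0         # buffer size, packets
dt = 1e-3         # Euler time step, s

W, q = 1.0, 0.0   # per-flow window (packets) and queue length (packets)

for _ in range(int(60 / dt)):            # simulate 60 seconds
    R = R0 + q / C                       # RTT = propagation + queueing delay
    p = 0.1 if q >= B else 0.0           # crude loss signal when buffer fills
    dW = 1.0 / R - (W / 2.0) * (W / R) * p   # AIMD fluid window dynamics
    dq = N * W / R - C                   # aggregate arrivals minus service
    W = max(W + dW * dt, 1.0)
    q = min(max(q + dq * dt, 0.0), B)

print(f"window ~ {W:.1f} pkts, queue ~ {q:.0f} pkts, RTT ~ {R * 1000:.0f} ms")
```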

    Enabling a Low-delay Internet Service via Built-in Performance Incentives

    The single best-effort service of the Internet struggles to accommodate the divergent needs of different distributed applications. Numerous alternative network architectures have been proposed to offer diversified network services. These innovative solutions failed to gain wide deployment primarily due to economic and legacy issues rather than technical shortcomings. Our paper presents a new simple paradigm for network service differentiation that accounts explicitly for the multiplicity of Internet service providers and users as well as their economic interests in environments with partly deployed new services. Our key idea is to base the service differentiation on performance itself, rather than price. We design RD (Rate-Delay) network services that give a user an opportunity to choose between a higher transmission rate and a lower queuing delay at a congested network link. To support the two services, an RD router maintains two queues per output link and achieves the intended rate-delay differentiation through simple link scheduling and dynamic buffer sizing. Our extensive evaluation of the RD network services reports on their performance, deployability, and security properties.
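
    The sketch below conveys the flavour of two-queue rate-delay differentiation using a simple weighted round-robin service discipline; the queue names, weights, and buffer limits are assumptions made for illustration and are not the RD routers' actual scheduling or buffer-sizing rules.

```python
# Hypothetical two-queue output link: an R (rate) queue with a large buffer
# and a D (delay) queue with a small buffer, with the R queue receiving a
# larger share of the link. Weights and limits are assumed values.

from collections import deque

R_QUEUE_LIMIT = 400         # packets; large buffer favours throughput
D_QUEUE_LIMIT = 20          # packets; small buffer bounds queueing delay
R_WEIGHT, D_WEIGHT = 3, 1   # serve R and D packets in a 3:1 ratio

r_queue, d_queue = deque(), deque()

def enqueue(packet, wants_low_delay):
    """Drop-tail admission into the queue chosen by the sender's service."""
    q, limit = (d_queue, D_QUEUE_LIMIT) if wants_low_delay else (r_queue, R_QUEUE_LIMIT)
    if len(q) < limit:
        q.append(packet)

def next_packet(slot):
    """Pick the next packet to send using simple weighted round-robin."""
    serve_r = (slot % (R_WEIGHT + D_WEIGHT)) < R_WEIGHT
    primary, fallback = (r_queue, d_queue) if serve_r else (d_queue, r_queue)
    if primary:
        return primary.popleft()
    if fallback:
        return fallback.popleft()
    return None
```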

    Trading link utilization for queueing delays: an adaptive approach

    Understanding the relationship between queueing delays and link utilization for general traffic conditions is an important open problem in networking research. Difficulties in understanding this relationship stem from the fact that it depends on the complex nature of arriving traffic and the problems associated with modelling such traffic. Existing AQM schemes achieve "low delay" and "high utilization" by responding early to congestion, without considering the exact relationship between delay and utilization. However, in the context of exploiting the delay/utilization tradeoff, the optimal choice of a queueing scheme's control parameter depends on the relative importance assigned to queueing delay and utilization. The optimal choice of control parameter is the one that maximizes a benefit that can be defined as the difference between utilization and the cost associated with queueing delay. We present two practical algorithms, Optimal Drop-Tail (ODT) and Optimal BLUE (OB), that are designed with a common performance goal: namely, maximizing this benefit. Their novelty lies in the fact that they maximize the benefit in an online manner, without requiring knowledge of the traffic conditions or specific delay/utilization models, and without complex parameter estimation. Packet-level ns2 simulations demonstrate the efficacy of the proposed algorithms and the framework in which they are designed.
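
    A minimal sketch of the kind of online tradeoff described above: a drop-tail queue limit is nudged up or down depending on whether the measured benefit (utilization minus a delay cost) improved after the previous change. The hill-climbing rule, cost weight, and step size are assumptions for illustration, not the ODT/OB algorithms themselves.

```python
# Hypothetical online search for a drop-tail queue limit that maximizes
# benefit = utilization - beta * average queueing delay. The hill-climbing
# update, beta, and step size are illustrative assumptions, not ODT/OB.

BETA = 2.0        # relative cost of one second of queueing delay
STEP = 10         # packets added or removed per measurement interval

def benefit(utilization, avg_delay_s, beta=BETA):
    return utilization - beta * avg_delay_s

def update_limit(limit, direction, prev_benefit, utilization, avg_delay_s):
    """Adjust the queue limit once per measurement interval: keep moving in
    the same direction while the benefit improves, reverse otherwise."""
    b = benefit(utilization, avg_delay_s)
    if b < prev_benefit:
        direction = -direction
    limit = max(STEP, limit + direction * STEP)
    return limit, direction, b

# Example interval: 97% utilization with 15 ms average queueing delay.
limit, direction, prev = update_limit(200, +1, 0.90, 0.97, 0.015)
print(limit, direction, round(prev, 3))   # -> 210 1 0.94
```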