
    TCP Libra: Exploring RTT-Fairness for TCP

    The majority of Internet users rely on the Transmission Control Protocol (TCP) to download large multimedia files from remote servers (e.g., P2P file sharing). TCP has been advertised as a fair-share protocol. However, when session round-trip times (RTTs) differ radically from each other, the share of the bottleneck link may be anything but fair. This motivates us to explore a new TCP variant, TCP Libra, that guarantees fair sharing regardless of RTT. TCP Libra requires changes at the sender only and is thus easy to deploy. Via analytic modeling and simulations, we show that TCP Libra achieves fairness while maintaining efficiency and friendliness to TCP New Reno. A comparison with other TCP versions that have been reported as RTT-fair in the literature is also carried out.
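    The RTT bias that motivates TCP Libra can be seen in a toy fluid model of standard AIMD (this sketch illustrates the well-known problem, not TCP Libra's own algorithm): each flow grows its window by one segment per RTT and halves it on a synchronized loss, so the short-RTT flow ramps up faster and captures most of the bottleneck.

```python
def simulate_aimd(rtts, capacity=1000.0, steps=100000, dt=0.001):
    """Fluid AIMD model: each flow adds one segment per RTT; when the
    combined sending rate exceeds the bottleneck capacity, all flows
    halve their windows (synchronized loss). Returns per-flow throughput."""
    w = [1.0] * len(rtts)                    # congestion windows (segments)
    tput = [0.0] * len(rtts)                 # delivered segments per flow
    for _ in range(steps):
        rates = [w[i] / rtts[i] for i in range(len(rtts))]
        if sum(rates) > capacity:
            w = [wi / 2.0 for wi in w]       # multiplicative decrease
        else:
            w = [w[i] + dt / rtts[i] for i in range(len(rtts))]  # +1 seg/RTT
            for i, r in enumerate(rates):
                tput[i] += r * dt
    return tput

t = simulate_aimd([0.01, 0.1])               # 10 ms RTT vs 100 ms RTT
ratio = t[0] / t[1]                          # short-RTT flow wins heavily
```

    With synchronized losses the throughput ratio scales roughly as the square of the RTT ratio, which is exactly the unfairness an RTT-fair design must compensate for.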

    Selecting the Buffer Size for an IP Network Link

    In this paper, we revisit the problem of selecting the buffer size for an IP network link. After a comprehensive overview of issues relevant to link buffer sizing, we examine the usefulness of existing guidelines for choosing the buffer size. Our analysis shows that the existing recommendations are not only difficult to implement in the context of IP networks but can also severely hurt interactive distributed applications. Then, we argue that the networking research community should change its way of thinking about the link buffer sizing problem: the focus should shift from optimizing performance for applications of a particular type to maximizing the diversity of application types that IP networks can support effectively. To achieve this new objective, we propose using small buffers for IP network links.
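    Two widely cited buffer-sizing guidelines of the kind the paper reexamines can be computed directly (the link speed, RTT, and flow count below are illustrative, not values from the paper): the classic bandwidth-delay-product rule, and the small-buffer rule of Appenzeller et al. that divides the BDP by the square root of the number of concurrent flows.

```python
from math import sqrt

def bdp_buffer(capacity_bps, rtt_s):
    """Classic rule of thumb: buffer = bandwidth-delay product, in bytes."""
    return capacity_bps * rtt_s / 8

def small_buffer(capacity_bps, rtt_s, n_flows):
    """Small-buffer guideline: BDP divided by sqrt(number of flows)."""
    return bdp_buffer(capacity_bps, rtt_s) / sqrt(n_flows)

link = 10e9                       # 10 Gb/s link
rtt = 0.25                        # 250 ms round-trip time
classic = bdp_buffer(link, rtt)          # 312.5 MB under the classic rule
small = small_buffer(link, rtt, 10000)   # ~3.1 MB with 10,000 flows
```

    The two rules differ by two orders of magnitude here, which illustrates why the choice of guideline matters so much for delay-sensitive applications.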

    Possible Paradigm Shifts in Broadband Policy

    Debates over Internet policy tend to be framed by the way the Internet existed in the mid-1990s, when it first became a mass-market phenomenon. At the risk of oversimplifying, the Internet was initially used by academics and tech-savvy early adopters to send email and browse the web over a personal computer connected to a telephone line, via networks interconnected in only a limited way. Since then, the Internet has become much larger and more diverse in terms of users, applications, technologies, and business relationships. More recently, Internet growth has begun to slow both in the number of connections and in overall traffic. The major exception to this pattern is wireless, which has exhibited accelerating growth and has begun consistently to provide speeds in excess of 10 Mbps. Moreover, the emergence of the smartphone provides the most recent example of how changes in collateral technologies can play a key role in transforming network usage. These changes underscore that the Internet may be undergoing a paradigm shift and that generalizing from the past serves little purpose when circumstances have materially changed. Furthermore, policymakers should avoid regulating based on any particular vision of the technological future. Instead, they should craft policies designed to preserve room for experimentation with different approaches, which will require tolerating a significant degree of nonuniformity, uncertainty, and disruption.

    Smartacking: Improving TCP Performance from the Receiving End

    We present smartacking, a technique that improves the performance of the Transmission Control Protocol (TCP) via adaptive generation of acknowledgments (ACKs) at the receiver. When the bottleneck link is underutilized, the receiver transmits an ACK for each delivered data segment and thereby allows the connection to acquire the available capacity promptly. When the bottleneck link is at its capacity, the smartacking receiver sends ACKs at a lower frequency, reducing the control-traffic overhead and slowing down the congestion window growth to utilize the network capacity more effectively. To promote quick deployment of the technique, our primary implementation of smartacking modifies only the receiver. This implementation estimates the sender's congestion window using a novel algorithm of independent interest. We also consider different implementations of smartacking where the receiver relies on explicit assistance from the sender or the network. Our experiments for a wide variety of settings show that TCP performance can benefit substantially from smartacking, especially in environments with low levels of connection multiplexing on bottleneck links. As our extensive evaluation reveals no scenarios in which the technique undermines overall performance, we believe that smartacking represents a promising direction for enhancing TCP.
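    The receiver-side policy described above can be sketched as follows. This is a hypothetical illustration of the idea, not the paper's algorithm: the utilization threshold, the ACK-thinning factor, and the class interface are all assumptions, and the paper's actual receiver estimates the sender's congestion window rather than taking a rate estimate as input.

```python
class SmartackReceiver:
    """Toy smartacking-style policy: ACK every segment while the bottleneck
    looks underutilized; thin ACKs once the estimated sending rate nears
    the estimated bottleneck capacity."""

    def __init__(self, est_capacity_bps, thin_factor=4, util_threshold=0.9):
        self.est_capacity = est_capacity_bps
        self.thin_factor = thin_factor        # ACK every k-th segment when thinning
        self.util_threshold = util_threshold  # switch point for thinning
        self.unacked = 0                      # segments since last ACK

    def on_segment(self, est_send_rate_bps):
        """Return True if an ACK should be sent for this segment."""
        self.unacked += 1
        utilization = est_send_rate_bps / self.est_capacity
        if utilization < self.util_threshold:
            self.unacked = 0
            return True                       # ACK per segment: fast ramp-up
        if self.unacked >= self.thin_factor:
            self.unacked = 0
            return True                       # thinned ACKs near saturation
        return False

rx = SmartackReceiver(est_capacity_bps=10e6)
rx.on_segment(1e6)     # low utilization: ACK immediately
rx.on_segment(9.5e6)   # near capacity: start withholding ACKs
```

    The design point this captures is the trade-off in the abstract: frequent ACKs speed up window growth when capacity is spare, while sparse ACKs cut overhead once the link is full.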

    Comparison Of Active Queue Management Techniques To Be Used In Intserv Implementation

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2004. Active Queue Management is recommended to overcome the performance limitations of TCP congestion control over drop-tail queues. Active queue management techniques are built on the idea of notifying TCP endpoints early, before the queue overflows and packets are lost. Flow Random Early Drop (FRED), GREEN, Stochastic Fair Blue (SFB), and Stabilized RED (SRED) are active queue management algorithms that are flow-based in nature. The main objective of this thesis is to present a comparative analysis of the performance of the FRED, GREEN, SFB, and SRED algorithms using the NS network simulator. This simulation tool is used to conduct a comprehensive analysis of the algorithms in terms of average queue size, fairness, utilization, and packet loss rate under different network topologies and traffic patterns. We believe a comparative study of this kind can provide a better understanding of these flow-based active queue management algorithms proposed for TCP/IP congestion control and can help in selecting the appropriate algorithm to deploy in different situations.
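    The early-notification idea common to all four schemes can be sketched with the original RED drop function (a minimal illustration of the shared principle, not FRED, GREEN, SFB, or SRED themselves; the threshold and probability values are illustrative): packets are dropped or marked probabilistically before the queue overflows, so senders back off before tail-drop losses occur.

```python
import random

def red_drop_probability(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    """RED-style drop probability: zero below min_th, rising linearly to
    max_p at max_th, and 1.0 (forced drop) beyond max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def should_drop(avg_qlen, rng=random.random):
    """Decide probabilistically whether to drop the arriving packet."""
    return rng() < red_drop_probability(avg_qlen)

# Below min_th no packet is touched; above max_th every packet is dropped.
```

    The flow-based variants compared in the thesis differ mainly in how they compute this probability per flow rather than per queue, so that unresponsive flows cannot crowd out well-behaved TCP traffic.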

    A Comparison of Poisson and Uniform Sampling for Active Measurements


    Floor the Ceil & Ceil the Floor: Revisiting AIMD Evaluation

    Additive Increase Multiplicative Decrease (AIMD) is a widely used congestion control algorithm that is known to be fair and efficient in utilizing network resources. In this paper, we revisit the performance of the AIMD algorithm under realistic conditions by extending the seminal model of Chiu et al. We show that under realistic conditions the fairness and efficiency of AIMD are sensitive to changes in network conditions. Surprisingly, the root cause of this sensitivity is the way the congestion window is rounded during a multiplicative decrease phase. For instance, the floor function is often used to round the congestion window value because either kernel implementations or protocol restrictions mandate the use of integers for system variables. To solve the sensitivity issue, we provide a simple solution: alternately use the floor and ceiling functions when computing the congestion window during a multiplicative decrease phase, whenever the congestion window size is an odd number. We observe that with our solution the efficiency improves and the fairness becomes one order of magnitude less sensitive to changes in network conditions.
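    The proposed fix can be sketched as follows (a minimal illustration of the alternation idea; the class layout and variable names are assumptions, not the paper's implementation): when the integer congestion window is odd, successive multiplicative decreases alternate between flooring and ceiling the halved value, instead of always flooring.

```python
import math

class AimdWindow:
    """Integer AIMD congestion window with floor/ceil alternation on
    odd-valued multiplicative decreases."""

    def __init__(self, cwnd=1):
        self.cwnd = cwnd          # congestion window, in segments
        self.use_ceil = False     # alternation state for odd windows

    def additive_increase(self):
        self.cwnd += 1            # +1 segment per RTT

    def multiplicative_decrease(self):
        if self.cwnd % 2 == 1:
            # alternate floor(w/2) and ceil(w/2) across successive decreases
            half = math.ceil(self.cwnd / 2) if self.use_ceil else self.cwnd // 2
            self.use_ceil = not self.use_ceil
            self.cwnd = max(1, half)
        else:
            self.cwnd //= 2

w = AimdWindow(cwnd=7)
w.multiplicative_decrease()   # floor(7/2) = 3
w.multiplicative_decrease()   # odd again, so ceil(3/2) = 2
```

    Always flooring an odd window systematically undershoots w/2 by half a segment per decrease; alternating floor and ceiling makes the rounding error average out to zero over successive decreases.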

    On the Dynamics and Significance of Low Frequency Components of Internet Load

    Dynamics of Internet load are investigated using statistics of round-trip delays, packet losses, and out-of-order sequences of acknowledgments. Several segments of the Internet are studied. They include a regional network (the John von Neumann Center Network), a segment of the NSFNet backbone, and a cross-country network consisting of regional and backbone segments. Issues addressed include: (a) dominant time scales in network workload; (b) the relationship between packet loss and different statistics of round-trip delay (average, minimum, maximum, and standard deviation); (c) the relationship between out-of-sequence acknowledgments and different statistics of delay; (d) the distribution of delay; (e) a comparison of results across different network segments (regional, backbone, and cross-country); and (f) a comparison of results across time for a specific network segment. This study attempts to characterize the dynamics of Internet workload from an end-point perspective. A key conclusion from the data is that efficient congestion control is still a very difficult problem in large internetworks. Nevertheless, there are interesting signals of congestion that may be inferred from the data. Examples include (a) the presence of slow oscillation components in smoothed network delay, (b) an increase in conditional expected loss and conditional out-of-sequence acknowledgments as a function of various statistics of delay, and (c) a change in delay distribution parameters as a function of load, while the distribution itself remains the same. The results have potential application in heuristic algorithms and analytical approximations for congestion control.
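    One of the congestion signals mentioned above, conditional loss as a function of a delay statistic, can be estimated directly from probe traces. The sketch below uses synthetic data; the record format and threshold are illustrative, not the study's measurement setup.

```python
def conditional_loss(samples, delay_threshold):
    """Estimate P(loss | RTT > threshold) from a trace of probes.
    samples: list of (rtt_seconds, lost_bool) tuples."""
    high = [lost for rtt, lost in samples if rtt > delay_threshold]
    if not high:
        return 0.0                     # no probes above the threshold
    return sum(high) / len(high)

# Synthetic trace: loss clusters on the high-delay probes.
probes = [(0.05, False), (0.06, False), (0.20, True),
          (0.25, True), (0.22, False), (0.04, False)]
p = conditional_loss(probes, 0.15)     # 2 of the 3 high-delay probes saw loss
```

    A conditional loss probability that rises with the delay statistic is exactly the kind of inferable signal the study points to as usable input for congestion-control heuristics.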