2,431 research outputs found

    An Improved Link Model for Window Flow Control and Its Application to FAST TCP

    This paper presents a link model which captures the queue dynamics in response to a change in a transmission control protocol (TCP) source's congestion window. By considering both self-clocking and the link integrator effect, the model generalizes existing models and is shown to be more accurate in both open-loop and closed-loop packet-level simulations. It reduces to the known static link model when flows' round-trip delays are identical, and approximates the standard integrator link model when there is significant cross traffic. We apply this model to the stability analysis of fast active queue management scalable TCP (FAST TCP), including its filter dynamics. Under this model, the FAST control law is linearly stable for a single bottleneck link with an arbitrary distribution of round-trip delays. This result resolves the notable discrepancy between empirical observations and previous theoretical predictions. The analysis highlights the critical role of self-clocking in TCP stability, and the proof technique is new and less conservative than existing ones.
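    For context, the two limiting models named in the abstract can be written in their standard textbook forms (a sketch of the conventional models from the window-based flow control literature, not the paper's generalized model). For a single bottleneck of capacity c, with source i having congestion window w_i(t), round-trip propagation delay d_i and queueing delay p(t):

        % Integrator link model: the queue integrates the excess arrival rate
        \dot{p}(t) = \frac{1}{c}\Big(\sum_i \frac{w_i(t)}{d_i + p(t)} - c\Big)

        % Static link model: queueing delay adjusts so the aggregate rate matches capacity
        \sum_i \frac{w_i(t)}{d_i + p(t)} = c

    The paper's contribution, per the abstract, is a model that interpolates between these two regimes depending on the mix of round-trip delays and cross traffic.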

    Delay-oriented active queue management in TCP/IP networks

    Internet-based applications and services are pervading everyday life, and the growing popularity of real-time, time-critical and mission-critical applications sets new challenges for the Internet community. The requirement to reduce response time, and therefore to control latency, is increasingly emphasized. This thesis seeks to reduce queueing delay through active queue management. While mathematical studies and simulations reveal complex trade-off relationships among performance indices such as throughput, packet loss ratio and delay, this thesis aims to find an improved active queue management algorithm which emphasizes delay control without trading away much of the other performance indices, namely throughput and packet loss ratio. The thesis observes that in TCP/IP networks the packet loss ratio is a major reflection of congestion severity, or load. With a properly functioning active queue management algorithm, traffic load will in general push the feedback system to an equilibrium point in terms of packet loss ratio and throughput. Queue length, on the other hand, is a determining factor in system delay performance while having only a slight influence on the equilibrium. This observation suggests the possibility of reducing delay while keeping throughput and packet loss ratio relatively unchanged. The thesis also observes that queue length fluctuation reflects both load changes and natural fluctuation in the arriving bit rate. Monitoring queue length alone cannot distinguish between the two or identify congestion status; yet identifying this difference is crucial to finding the situations in which the average queue size, and hence queueing delay, can be properly controlled and reasonably reduced. However, many existing active queue management algorithms only monitor queue length, and their control policies are based solely on this measurement. The novel finding of our studies is that the distribution of the arriving bit rate of all sources contains information which can be a better indication of congestion status and which correlates with traffic burstiness. This thesis develops a simple and scalable way to measure its two most important characteristics, namely the mean and the variance of the arriving rate distribution. The measuring mechanism is based on the zombie list mechanism originally proposed and deployed in Stabilized RED to estimate the number of flows and identify misbehaving flows. This thesis modifies the original zombie list measuring mechanism, making it capable of measuring additional variables. Based on these additional measurements, the thesis proposes a novel modification to the RED algorithm. It uses a robust adaptive mechanism to ensure that the system reaches proper equilibrium operating points, in terms of packet loss ratio and queueing delay, under various loads. Furthermore, it identifies congestion states in which traffic is less bursty and adapts RED parameters to reduce the average queue size, and hence queueing delay, accordingly. Using the ns-2 simulation platform, the thesis simulates a single bottleneck link scenario, which represents an important and popular application setting such as a home access network or SoHo. Simulation results indicate that there are complex trade-off relationships among throughput, packet loss ratio and delay, and that within these relationships delay can be substantially reduced while the trade-offs in throughput and packet loss ratio are negligible.

    Simulation results show that our proposed active queue management algorithm can identify circumstances where traffic is less bursty and actively reduce queueing delay with hardly noticeable sacrifice in throughput and packet loss ratio. In conclusion, our novel approach enables the application of adaptive techniques to more RED parameters, including those affecting queue occupancy and hence queueing delay. The new modification to the RED algorithm is scalable and introduces no additional protocol overhead. In general it brings the benefit of substantially reduced delay at the cost of limited processing overhead and negligible degradation in throughput and packet loss ratio. However, our new algorithm has only been tested on responsive flows in a single bottleneck scenario; its effectiveness with a mix of responsive and non-responsive flows, and in more complicated network topologies, is left for future work.
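    A minimal sketch of the idea described above, under stated assumptions: track the mean and variance of the arriving bit rate with exponentially weighted averages and relax RED's queue thresholds when traffic appears less bursty. The class name, parameters and the adaptation rule are illustrative assumptions, not the thesis's actual zombie-list algorithm.

        # Hypothetical illustration of burstiness-aware RED adaptation.
        class BurstinessAwareRED:
            def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
                self.min_th = min_th          # RED minimum queue threshold (packets)
                self.max_th = max_th          # RED maximum queue threshold (packets)
                self.max_p = max_p            # RED maximum drop probability
                self.w = weight               # EWMA weight for rate statistics
                self.rate_mean = 0.0          # estimated mean arrival rate (bit/s)
                self.rate_var = 0.0           # estimated variance of the arrival rate

            def update_rate_stats(self, sampled_rate):
                """Update mean/variance estimates from a sampled arrival rate."""
                diff = sampled_rate - self.rate_mean
                self.rate_mean += self.w * diff
                self.rate_var += self.w * (diff * diff - self.rate_var)

            def adapt_thresholds(self, base_min=5, base_max=15):
                """If the coefficient of variation of the arrival rate is small,
                traffic is judged less bursty and a smaller target queue is used
                (the 0.2 cutoff is an assumed, illustrative value)."""
                if self.rate_mean <= 0:
                    return
                cv = (self.rate_var ** 0.5) / self.rate_mean
                if cv < 0.2:
                    self.min_th, self.max_th = max(base_min // 2, 1), base_max // 2
                else:
                    self.min_th, self.max_th = base_min, base_max

            def drop_probability(self, avg_queue):
                """Standard RED drop probability as a function of the average queue."""
                if avg_queue < self.min_th:
                    return 0.0
                if avg_queue >= self.max_th:
                    return 1.0
                return self.max_p * (avg_queue - self.min_th) / (self.max_th - self.min_th)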

    The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena

    The Internet is the most complex system ever created in human history. Unsurprisingly, its traffic therefore exhibits a rich variety of complex dynamics, self-organization, and other phenomena that have been researched for years. This paper is a review of the complex dynamics of Internet traffic. Departing from normal treatises, we take a view from both the network engineering and physics perspectives, showing the strengths, weaknesses, and insights of both. In addition, many less-covered phenomena such as traffic oscillations, large-scale effects of worm traffic, and comparisons of the Internet with biological models are covered. Comment: 63 pages, 7 figures, 7 tables; submitted to Advances in Complex Systems.
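    As a quick illustration of the self-similarity the abstract refers to (not material from the paper), the variance-time method is a common way to quantify it: aggregate a traffic trace at increasing block sizes m and estimate the Hurst exponent H from the slope of log Var(X^(m)) versus log m, since Var(X^(m)) ~ m^(2H-2) for a self-similar process.

        import numpy as np

        def hurst_variance_time(counts, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
            """Estimate H from a series of per-interval packet or byte counts."""
            variances = []
            for m in block_sizes:
                n_blocks = len(counts) // m
                if n_blocks < 2:
                    break
                blocks = np.asarray(counts[: n_blocks * m]).reshape(n_blocks, m)
                variances.append(blocks.mean(axis=1).var())
            ms = np.array(block_sizes[: len(variances)], dtype=float)
            slope, _ = np.polyfit(np.log(ms), np.log(variances), 1)
            return 1.0 + slope / 2.0   # H = 1 + beta/2, beta the fitted slope

        # H near 0.5 indicates short-range dependence; H approaching 1 indicates
        # strong self-similarity. A Poisson trace should give roughly 0.5.
        rng = np.random.default_rng(0)
        print(hurst_variance_time(rng.poisson(100, 4096)))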

    Study on the Performance of TCP over 10Gbps High Speed Networks

    Internet traffic is expected to grow phenomenally over the next five to ten years. To cope with such large traffic volumes, high-speed networks are expected to scale to capacities of terabits per second and beyond. Increasing the role of optics for packet forwarding and transmission inside high-speed networks seems to be the most promising way to accomplish this capacity scaling. Unfortunately, unlike electronic memory, integrated all-optical buffers remain a formidable challenge to build, even at sizes of a few dozen packets. On the other hand, many high-speed networks depend for reliability on the TCP/IP protocol, which is typically implemented in software and is sensitive to buffer size. For example, TCP requires a buffer equal to the bandwidth-delay product in switches/routers to maintain nearly 100% link utilization; otherwise, performance degrades substantially. But such large buffers complicate hardware design, increase power consumption, and introduce queueing delay and jitter, which again cause problems. Therefore, improving TCP performance over tiny-buffered high-speed networks is a top priority. This dissertation studies TCP performance in 10Gbps high-speed networks. First, a 10Gbps reconfigurable optical networking testbed is developed as a research environment. Second, a 10Gbps traffic sniffing tool is developed for measuring and analyzing TCP performance. New expressions for evaluating TCP loss synchronization are presented by carefully examining TCP congestion events. Based on these observations, two basic causes of the performance problems are studied. We find that minimizing TCP loss synchronization and reducing the impact of flow burstiness are the critical keys to improving TCP performance in tiny-buffered networks. Finally, we present a new TCP protocol called Multi-Channel TCP and a new congestion control algorithm called Desynchronized Multi-Channel TCP (DMCTCP). Our implementation takes advantage of the potential parallelism of Multi-Path TCP in Linux. Over an emulated 10Gbps network whose routers have only a few dozen packets of buffering, our experimental results confirm that DMCTCP achieves much better bottleneck link utilization than many other TCP variants. Our study is a new step towards the deployment of optical packet switching/routing networks.
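    A minimal sketch, not the dissertation's own expressions, of one common way to quantify TCP loss synchronization: group per-flow loss timestamps into congestion events and report, per event, the fraction of flows that back off. The event window below is an assumed parameter.

        def loss_synchronization(loss_times_per_flow, event_window=0.01):
            """loss_times_per_flow: dict flow_id -> sorted list of loss timestamps (s).
            Returns the average fraction of flows sharing each congestion event."""
            events = sorted((t, f) for f, ts in loss_times_per_flow.items() for t in ts)
            if not events:
                return 0.0
            n_flows = len(loss_times_per_flow)
            fractions, current_flows, event_start = [], set(), events[0][0]
            for t, flow in events:
                if t - event_start > event_window:        # close the current event
                    fractions.append(len(current_flows) / n_flows)
                    current_flows, event_start = set(), t
                current_flows.add(flow)
            fractions.append(len(current_flows) / n_flows)
            return sum(fractions) / len(fractions)

        # Fully synchronized flows give a value near 1.0; desynchronized flows
        # (the stated goal of DMCTCP) give values closer to 1/n_flows.
        example = {1: [0.10, 1.10], 2: [0.105, 2.30], 3: [1.102]}
        print(loss_synchronization(example))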

    Greediness control algorithm for multimedia streaming in wireless local area networks

    This work investigates the interaction between the application and transport layers while streaming multimedia in a residential Wireless Local Area Network (WLAN). Inconsistencies have been identified that can have a severe impact on the Quality of Experience (QoE) of end users. This problem arises from the streaming process's reliance on rate adaptation engines based on congestion avoidance mechanisms that try to obtain as much bandwidth as possible from the limited network resources. These transport layer mechanisms have no knowledge of the media they are carrying and as a result treat all traffic equally. This lack of knowledge of the media carried and of the characteristics of the target devices results in fair bandwidth distribution at the transport layer but creates unfairness at the application layer. This unfairness mostly affects user-perceived quality when streaming high quality multimedia. Essentially, bandwidth that is distributed fairly between competing video streams at the transport layer results in unfair distribution of video quality at the application layer. Therefore, there is a need to allow application layer streaming solutions to tune the aggressiveness of transport layer congestion control mechanisms, in order to create application layer QoE fairness between competing media streams by taking their device characteristics into account. This thesis proposes the Greediness Control Algorithm (GCA), an upper transport layer mechanism that eliminates quality inconsistencies caused by rate/congestion control mechanisms while streaming multimedia in wireless networks. GCA extends an existing solution (i.e. TCP Friendly Rate Control (TFRC)) by introducing two parameters that allow the streaming application to tune the aggressiveness of the rate estimation and, as a result, introduce fair distribution of quality at the application layer. The thesis shows that this rate adaptation technique, combined with a scalable video format, increases overall system QoE. Extensive simulation analysis demonstrates that this form of rate adaptation increases the overall user QoE achieved across a number of devices operating within the same home WLAN.
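    For reference, the baseline GCA builds on is the standard TFRC throughput equation (RFC 5348). The sketch below shows that equation plus a single hypothetical "greediness" scaling factor applied by the application; GCA's actual two parameters are not reproduced here, so the scaling is only an illustrative assumption of how an application could bias the rate estimate.

        from math import sqrt

        def tfrc_rate(s, rtt, p, t_rto=None, b=1):
            """TFRC sending rate in bytes/s: s = segment size (bytes), rtt (s),
            p = loss event rate, t_rto = retransmission timeout (defaults to 4*rtt)."""
            if p <= 0:
                return float("inf")               # no observed loss: equation gives no cap
            t_rto = 4 * rtt if t_rto is None else t_rto
            denom = (rtt * sqrt(2 * b * p / 3)
                     + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
            return s / denom

        def greediness_adjusted_rate(s, rtt, p, greediness=1.0):
            """greediness > 1 makes the stream more aggressive, < 1 more conservative
            (a hypothetical single-parameter stand-in for GCA's tuning)."""
            return greediness * tfrc_rate(s, rtt, p)

        # Example: 1460-byte segments, 40 ms RTT, 1% loss event rate, slightly
        # less greedy than plain TFRC.
        print(greediness_adjusted_rate(1460, 0.04, 0.01, greediness=0.8))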

    Unicast UDP Usage Guidelines for Application Designers


    An Experimental Investigation of TCP Performance in High Bandwidth-Delay Product Paths.

    The performance of the Internet is determined not only by the network and hardware technologies that underlie it, but also by the software protocols that govern its use. In particular, the TCP transport protocol is responsible for carrying the great majority of traffic in the current Internet, including web traffic, email, file transfers, and music and video downloads. TCP provides two main functions. First, it detects and retransmits packets lost during a transfer, thereby providing a reliable transport service to higher-layer applications. Second, it enforces congestion control: it seeks to match the rate at which packets are injected into the network to the available network capacity. A particular aim here is to avoid so-called congestion collapse, prevalent in the late 1980s prior to the inclusion of congestion control functionality in TCP. Over the last decade or so, link speeds within networks have increased by several orders of magnitude. While the TCP congestion control algorithm has proved remarkably successful, it is now recognised that its performance is poor on paths with a high bandwidth-delay product, e.g. see [13, 8, 14, 26, 12] and references therein. With the increasing prevalence of high-speed links, this issue is becoming of widespread concern. This is reflected, for example, in the fact that the Linux operating system now employs an experimental algorithm called BIC-TCP [26], while Microsoft is actively studying new algorithms such as Compound-TCP [25]. While a number of proposals have been made to modify the TCP congestion control algorithm, all of these are still experimental and pending evaluation, as they change the congestion control in new and significant ways and their effects on the network are not well understood. In fact, the basic properties of networks employing these algorithms may be very different from those of networks of standard TCP flows. The aim of this thesis is to address, in part, this basic observation.
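    A worked example (with assumed numbers, not figures from the thesis) of why standard TCP struggles on high bandwidth-delay product paths: after a single loss the congestion window is halved and then grows by one segment per round trip, so recovering the full window on a fast long-distance path takes thousands of round trips.

        link_rate = 10e9            # 10 Gbit/s path (assumed)
        rtt = 0.1                   # 100 ms round-trip time (assumed)
        mss = 1500 * 8              # segment size in bits

        bdp_bits = link_rate * rtt
        window_pkts = bdp_bits / mss             # segments needed to fill the pipe
        recovery_rtts = window_pkts / 2          # one MSS per RTT after halving
        print(f"window = {window_pkts:.0f} segments, "
              f"recovery = {recovery_rtts:.0f} RTTs = {recovery_rtts * rtt / 60:.0f} minutes")

    With these assumed numbers the pipe holds roughly 83,000 segments, and standard TCP needs on the order of an hour of loss-free transmission to grow back to full utilization, which is the sensitivity that motivates algorithms like BIC-TCP and Compound-TCP.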