
    STCP: A New Transport Protocol for High-Speed Networks

    Transmission Control Protocol (TCP) is the dominant transport protocol today and is likely to be adopted in future high-speed and optical networks. A large body of work has modified or tuned the Additive Increase Multiplicative Decrease (AIMD) principle in TCP to enhance network performance. In this work, to efficiently exploit the high bandwidth available from high-speed and optical infrastructures, we propose Stratified TCP (STCP), which employs parallel virtual transmission layers in high-speed networks. In this technique, the AIMD principle of TCP is modified to probe the available link bandwidth more aggressively and efficiently, which in turn increases performance. Simulation results show that STCP offers a considerable performance improvement over other TCP variants such as the conventional TCP protocol and Layered TCP (LTCP).
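
    The following is a minimal sketch, in Python, of the baseline AIMD window update that this abstract takes as its starting point; the parameter names are illustrative assumptions, and the stratification into parallel virtual layers described above is not reproduced here, so nothing in this sketch is taken from the STCP implementation itself.

    def aimd_update(cwnd, event, alpha=1.0, beta=0.5):
        """Return the new congestion window (in segments) after one event.

        event is 'ack' for a new acknowledgement or 'loss' for a loss indication;
        alpha is the additive-increase step per round trip and beta the
        multiplicative-decrease factor (illustrative defaults).
        """
        if event == 'ack':
            # Additive increase: roughly alpha segments per RTT,
            # i.e. alpha / cwnd per acknowledged segment.
            return cwnd + alpha / cwnd
        if event == 'loss':
            # Multiplicative decrease on a congestion indication.
            return max(1.0, cwnd * beta)
        return cwnd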

    Design of Feedback Controls Supporting TCP Based on the State–Space Approach

    This paper investigates how to design feedback controls supporting the transmission control protocol (TCP) based on the state-space approach for the linearized system of the well-known additive increase multiplicative decrease (AIMD) dynamic model. We formulate the feedback control design problem as state-space models without assuming their structure in advance. We thereby obtain three results that have not been observed in previous studies of the congestion control problem. 1) In order to fully support TCP, we need a proportional-derivative (PD)-type state-feedback control structure in terms of queue length (or RTT: round trip time). This backs up the conjecture in the networking literature that the AQM RED is not enough to control TCP's dynamic behavior, since RED can be classified as a P-type AQM (or as an output feedback control for the linearized AIMD model). 2) In order to fully support TCP in the presence of delays, we derive delay-dependent feedback control structures that compensate for delays explicitly, under the assumption that the RTT, capacity and number of sources are known, whereas all existing AQMs, including RED, REM/PI and AVQ, are delay-independent controls. 3) In an attempt to interpret different AQM structures in a unified manner rather than to compare them via simulations, we propose a PID-type mathematical framework using integral control action. As a performance index to measure the deviation of the closed-loop system from an equilibrium point, we use a linear quadratic (LQ) cost of the transients of state and control variables such as queue length, aggregate rate, jitter in the aggregate rate, and congestion measure. Stabilizing gains of the feedback control structures are obtained by minimizing the LQ cost. We then discuss the impact of the control structure on performance using the PID-type mathematical framework. All results are extended to the case of multiple links and heterogeneous delays.
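
    As a rough illustration of the PID-type AQM framework discussed above, the sketch below drives a packet-marking probability from the deviation of the queue length from a reference value; the gains, sampling interval and clamping are placeholder assumptions for illustration, not values obtained from the paper's LQ design.

    class PidAqm:
        """Illustrative PID-type AQM marking controller (placeholder gains)."""

        def __init__(self, q_ref, kp, ki, kd, dt):
            self.q_ref, self.kp, self.ki, self.kd, self.dt = q_ref, kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def mark_probability(self, q_len):
            error = q_len - self.q_ref                          # deviation from the target queue length
            self.integral += error * self.dt                    # I-term removes steady-state offset
            derivative = (error - self.prev_error) / self.dt    # D-term reacts to queue growth rate
            self.prev_error = error
            p = self.kp * error + self.ki * self.integral + self.kd * derivative
            return min(1.0, max(0.0, p))                        # clamp to a valid probability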

    Modeling TCP Throughput: An Elaborated Large-Deviations-Based Model and its Empirical Validation

    In today's Internet, a large part of the traffic is carried using the TCP transport protocol. Characterizing the variations of TCP traffic is thus a major challenge, both for resource provisioning and for Quality of Service purposes. However, most existing models are limited to predicting the (almost-sure) mean TCP throughput and are unable to characterize deviations from this value. In this paper, we propose a method to describe the deviations of a long TCP flow's throughput from its almost-sure mean value. This method relies on an ergodic large-deviations result, which was recently proved to hold on almost every single realization for a large class of stochastic processes. Applying this result to a Markov chain modeling the congestion window's evolution of a long-lived TCP flow, we show that it is practically possible to quantify and statistically bound the throughput's variations at the different scales of interest for applications. Our Markov-chain model can take into account various network conditions, and we demonstrate the accuracy of our method's predictions in different situations using simulations, experiments and real-world Internet traffic. In particular, in the classical case of Bernoulli losses, we demonstrate: i) the consistency of our method with the widely used square-root formula predicting the almost-sure mean throughput, and ii) its ability to additionally predict finer properties reflecting the traffic variability at different scales.
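
    For reference, the widely used square-root formula mentioned above is commonly written as T ~ (MSS / RTT) * C / sqrt(p), where p is the packet loss rate and the constant C depends on the modelling assumptions (C = sqrt(3/2) in the usual derivation for periodic losses); a minimal Python rendering under that assumption follows.

    import math

    def mean_tcp_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(1.5)):
        """Approximate almost-sure mean TCP throughput in bytes per second."""
        return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

    # Example: 1460-byte segments, 100 ms RTT, 1% loss -> roughly 1.8e5 bytes/s.
    print(mean_tcp_throughput(1460, 0.1, 0.01))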

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedying this problem has been an active research area; however, a widely accepted and adopted solution is yet to emerge. The difficulties of an acceptable solution lie in the areas of compatibility, scalability, computational complexity and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement, and pursues an end-to-end solution framework to the problem. The most noticeable cause of the performance degradation of TCP in wireless networks is the higher packet loss rate as compared to that in traditional wired networks. Packet loss type differentiation has therefore been the focus of many proposed TCP performance enhancement schemes. Studies conducted for this dissertation suggest that, besides the standard TCP's inability to discriminate congestion packet losses from losses related to wireless link errors, the standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly high-speed wireless, networks. This dissertation proposes a simple, efficient, and effective end-to-end solution framework that enhances TCP's performance through techniques of adaptive congestion control and active queue management. By end-to-end, we mean a solution that does not require routers to be wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance has been evaluated through extensive simulations. TCP-Jersey consists of an adaptive congestion control algorithm at the source by means of the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source based on careful examination of the congestion marks carried by duplicate acknowledgement packets (DUPACKs). Several improvements to the proposed TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link; performance results have been presented and analyzed via extensive simulations of various network configurations. A stability analysis of the proposed ARE-based additive increase and adaptive decrease (AJAD) congestion control algorithm has been conducted, and the analytical results have been verified by simulations. The performance of TCP-Jersey has been compared to that of a perfect, but not practical, TCP scheme, and encouraging results have been observed. Finally, the framework of TCP-Jersey's source algorithm has been extended and generalized for rate-based congestion control, as opposed to TCP's window-based congestion control, to provide a design platform for applications, such as real-time multimedia, that do not use TCP as the transport protocol yet need to control network congestion and combat packet losses in wireless networks.
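
    One plausible form of an achievable rate estimation (ARE) filter over packet inter-arrival times, of the kind summarised above, is sketched below in Python; the exponential weighting and its gain are assumptions for illustration, not TCP-Jersey's published filter.

    class RateEstimator:
        """Toy achievable-rate estimator driven by ACK inter-arrival times."""

        def __init__(self, gain=0.9):
            self.gain = gain            # smoothing factor of the filter (assumed)
            self.rate = 0.0             # current rate estimate in bytes per second
            self.last_arrival = None    # time of the previous acknowledgement

        def on_ack(self, now_s, acked_bytes):
            if self.last_arrival is not None:
                interval = now_s - self.last_arrival
                if interval > 0:
                    sample = acked_bytes / interval       # instantaneous rate sample
                    self.rate = self.gain * self.rate + (1 - self.gain) * sample
            self.last_arrival = now_s
            return self.rate
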
    In conclusion, the framework architecture presented in this dissertation, which combines adaptive congestion control and active queue management to solve the TCP performance degradation problem in wireless networks, has been shown to be a promising answer to the problem due to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, and high effectiveness with low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new design of the transport protocol. It is an end-to-end approach to the performance degradation problem since it does not require split-mode connection establishment and maintenance using special wireless-aware software agents at the routers. The proposed solution also differs from solutions that rely on link layer error notifications for packet loss differentiation. It is also unique among proposed end-to-end solutions in that it differentiates packet losses attributed to wireless link errors from congestion-induced packet losses directly from the explicit congestion indication marks in the DUPACK packets, rather than inferring the loss type from packet delay or delay jitter as in many other proposed solutions, undergoing a computationally expensive off-line training of a classification model (e.g., an HMM), or running a Bayesian estimation/detection process that requires estimates of the a priori loss probability distributions of the different loss types. The proposed solution is scalable and fully compatible with current practice in Internet congestion control and queue management, but adds a loss type differentiation function that effectively enhances TCP's performance over error-prone wireless networks. Limitations of the proposed solution architecture and areas for future research are also addressed.
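
    A hedged sketch of the loss differentiation idea described above: on duplicate acknowledgements, the presence of congestion marks decides whether to back off or to retransmit without cutting the rate. The function name, the fixed decrease factor and the "keep the window" reaction are illustrative assumptions rather than the dissertation's exact algorithm.

    def react_to_dupacks(cwnd, dupacks_carry_congestion_mark, beta=0.5):
        """Return (new_cwnd, loss_type) after three duplicate ACKs."""
        if dupacks_carry_congestion_mark:
            # Marks present: the loss is attributed to congestion, so back off.
            return max(1.0, cwnd * beta), "congestion"
        # No marks: the loss is attributed to wireless errors; retransmit the
        # segment but keep the sending rate.
        return cwnd, "wireless"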

    Network Simulation Cradle

    This thesis proposes the use of real-world network stacks instead of protocol abstractions in a network simulator, bringing the actual code used in computer systems inside the simulator and allowing for greater simulation accuracy. Specifically, a framework called the Network Simulation Cradle is created that supports the kernel source code from FreeBSD, OpenBSD and Linux, making the network stacks from these systems available to the popular network simulator ns-2. Simulating with these real-world network stacks reveals situations where the results differ significantly from ns-2's TCP models. The simulated network stacks can be compared directly with the same operating system running on an actual machine, making validation simple. When packet traces produced on a test network and in simulation are measured, the results are nearly identical, a level of accuracy previously unavailable with traditional TCP simulation models. The results of simulations comparing ns-2 TCP models and our framework are presented in this dissertation, along with validation studies of our framework showing how closely simulation resembles real-world computers. Using real-world stacks to simulate TCP is a complementary approach to using the existing TCP models and provides an extra level of validation. This way of simulating TCP and other protocols offers the network researcher or engineer new possibilities. One example is using the framework as a protocol development environment, which allows user-level development of protocols with a standard set of reproducible tests, the ability to test scenarios that are costly or impossible to build physically, and the ability to trace and debug the protocol code without affecting results.

    Improved congestion control for packet switched data networks and the Internet

    Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utility. This thesis is a step in the direction of improved network congestion control. Traditionally the Internet has adopted a best-effort policy while relying on an end-to-end mechanism. Complex functions are implemented by end users, keeping the core routers of the network simple and scalable. This policy also makes it easy to update the software at the users' end. Thus, most of the functionality of the current Internet lies within the end users' protocols, particularly within the Transmission Control Protocol (TCP). This strategy has worked well to date, but networks have evolved and the traffic volume has increased manyfold; hence routers need to be involved in controlling traffic, particularly during periods of congestion. Other benefits of using routers to control the flow of traffic would be facilitating the introduction of differentiated services or offering different qualities of service to different users. Any real congestion episode, whether due to demand exceeding the available bandwidth or to congestion created on a particular target host by computer viruses, will hamper the smooth execution of the offered network services. Thus, the role of congestion control mechanisms in modern computer networks is crucial. In order to find effective solutions to congestion control, in this thesis we use feedback control system models of computer networks. The closed loop formed by TCP/IP between the end hosts, through intermediate routers, relies on implicit feedback of congestion information through returning acknowledgements. This feedback information about the congestion state of the network can take the form of lost packets, changes in round trip time and the rate of arrival of acknowledgements. Thus, end hosts can execute either reactive or proactive congestion control mechanisms. The former approach uses duplicate acknowledgements and timeouts as congestion signals, as done in TCP Reno, whereas the latter depends on changes in the round trip time, as in TCP Vegas. The protocols employing the second approach are still in their infancy, as they cannot co-exist safely with protocols employing the first approach, whereas TCP Reno and its mutations, such as TCP SACK, are presently widely used in computer networks, including the current Internet. These protocols require packet losses to happen before they can detect congestion, thus inherently leading to wastage of time and network bandwidth. Active Queue Management (AQM) is an alternative approach which provides congestion feedback from routers to end users. It makes a network behave as a sensitive closed-loop feedback control system with a response time of one round trip time, with congestion information being delivered to the end hosts so that they reduce their data sending rates before actual packet losses happen. From this congestion information, end hosts can reduce their congestion window size, thus pumping fewer packets into a congested network until the congestion period is over and routers stop sending congestion signals. Keeping both approaches in view, we have adopted a two-pronged strategy to address the problem of congestion control.
    These are to adapt the network at its edges as well as at its core routers. We begin by introducing TCP/IP-based computer networks and defining the congestion control problem. Next we look at different proactive end-to-end protocols, including TCP Vegas, chosen for its better fairness properties. We address the incompatibility problem between TCP Vegas and TCP Reno by using ECN based on the Random Early Detection (RED) algorithm to adjust the parameters of TCP Vegas. Further, we develop two alternative algorithms, namely optimal minimum variance and generalized optimal minimum variance, for fair end-to-end protocols. The relationship between the (p, 1)-proportionally fair algorithm and the generalized algorithm is investigated, along with conditions for its stable operation. Noteworthy is a novel treatment of the issue of transient fairness. This represents the work done on congestion control at the edges of the network. Next, we focus on router-based congestion control algorithms and start with a survey of previous work in that direction. We select the RED algorithm for further work because it is recommended for the implementation of AQM. First we devise a new Hybrid RED algorithm which employs the instantaneous queue size along with an exponentially weighted moving average queue size for making decisions about packet marking/dropping, and adjusts the average value during periods of low traffic. This algorithm improves the link utilization and packet loss rate as compared to basic RED. We further propose a control-theory-based Auto-tuning RED algorithm that adapts to changing traffic load. This algorithm can clamp the average queue size to a desired reference value, which can be used to estimate queuing delays for Quality of Service purposes. As an alternative approach to router-based congestion control, we investigate Proportional, Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) control algorithms for AQM. New control-theoretic RED and frequency-response-based PI and PID control algorithms are developed and their performance is compared with that of existing algorithms. Later we transform the RED and PI based algorithms into their adaptive versions using the well-known square-root-of-p formula. The performance of these load-adaptive algorithms is compared with that of the previously developed fixed-parameter algorithms. Apart from some recent research, most previous efforts on the design of congestion control algorithms have been heuristic. This thesis provides an effective use of control theory principles in the design of congestion control algorithms. We develop fixed-parameter feedback congestion control algorithms as well as their adaptive versions. All of the newly proposed algorithms are evaluated using ns-based simulations. The thesis concludes with a number of research proposals emanating from the work reported.
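
    To make the marking rule that these RED variants build on concrete, the sketch below gives a stripped-down version of basic RED: an exponentially weighted moving average of the queue length drives a linear marking probability between two thresholds. The count-based spacing of marks in the original RED, and the Hybrid and Auto-tuning modifications described above, are omitted; the parameter values are common defaults, not the thesis's tuned settings.

    import random

    class Red:
        """Stripped-down RED queue (illustrative parameters)."""

        def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.weight = max_p, weight
            self.avg = 0.0     # EWMA of the instantaneous queue length

        def on_enqueue(self, q_len):
            """Update the average queue size; return True if the arriving packet is marked/dropped."""
            self.avg = (1 - self.weight) * self.avg + self.weight * q_len
            if self.avg < self.min_th:
                return False
            if self.avg >= self.max_th:
                return True
            # Linear marking probability between the two thresholds.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < p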

    QoS in Cognitive Packet Networks: Adaptive Routing, Flow and Congestion Control

    With the emergence of various applications that have different Quality of Service (QoS) requirements, the capability of a network to support QoS becomes increasingly important and necessary. This dissertation explores QoS in Cognitive Packet Networks (CPN) using adaptive routing, flow and congestion control. We present a detailed description and analysis of our proposed routing algorithms based on single and multiple QoS constraints. An online estimation of the packet loss rate over a path is introduced. We implement and evaluate the adaptive routing scheme in an experimental CPN test-bed. Our experiments support our claim that users can achieve their desired best-effort QoS through this routing scheme. We also propose a QoS-based flow and congestion control scheme that is built into the transport layer and specially designed to work with CPN to support users' QoS while remaining friendly to TCP. Theoretical models and experimental analysis are presented. Finally, we experimentally demonstrate that the proposed flow and congestion control scheme can effectively control the input flows, react to congestion and work with our proposed adaptive routing scheme to achieve users' QoS.