
    A History of the Improvement of Internet Protocols Over Satellites using ACTS

    This paper outlines the main results of a number of ACTS experiments on the efficacy of using standard Internet protocols over long-delay satellite channels. These experiments have been jointly conducted by NASA's Glenn Research Center and Ohio University over the last six years. The focus of our investigations has been the impact of long-delay networks with non-zero bit-error rates on the performance of the suite of Internet protocols. In particular, we have focused on the most widely used transport protocol, the Transmission Control Protocol (TCP), as well as several application layer protocols. This paper presents our main results, as well as references to more verbose discussions of our experiments.
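    As a rough back-of-the-envelope illustration (not a result from the paper) of why long delay combined with a non-zero bit-error rate hurts TCP, the well-known Mathis et al. approximation bounds steady-state TCP throughput by MSS / (RTT * sqrt(p)); a geostationary-satellite round trip inflates the denominator by an order of magnitude compared with a terrestrial path:

```python
from math import sqrt

def tcp_throughput_bound(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation: steady-state throughput <= MSS / (RTT * sqrt(p)), in bytes/s."""
    return mss_bytes / (rtt_s * sqrt(loss_rate))

# Illustrative numbers only: a ~550 ms geostationary round trip vs. a 50 ms
# terrestrial round trip, both with a residual packet-loss rate of 1e-5.
for rtt in (0.550, 0.050):
    bps = 8 * tcp_throughput_bound(1460, rtt, 1e-5)
    print(f"RTT {rtt * 1000:.0f} ms: at most {bps / 1e6:.1f} Mbit/s")
```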

    Study and Simulation of Enhancements for TCP (Transmission Control Protocol) Performance Over Noisy, High-Latency Links

    The designers of the TCP/IP protocol suite explicitly included support of satellites in their design goals. The goal of the Internet Project was to design a protocol which could be layered over different networking technologies to allow them to be concatenated into an internet. The results of this project included two protocols, IP and TCP. IP is the protocol used by all elements in the network, and it defines the standard packet format for IP datagrams. TCP is the end-to-end transport protocol commonly used between end systems on the Internet to derive a reliable bi-directional byte-pipe service from the underlying unreliable IP datagram service. Satellite links are explicitly mentioned in Vint Cerf's 2-page article which appeared in 1980 in CCR [2] to introduce the specifications for IP and TCP. In the past fifteen years, TCP has been demonstrated to work over many differing networking technologies, including paths containing satellite links. So if satellite links were in the minds of the designers from the beginning, what is the problem? The problem is that the performance of TCP has in some cases been disappointing. A goal of the authors of the original specification of TCP was to specify only enough behavior to ensure interoperability. The specification left a number of important decisions, in particular how much data is to be sent, and when, to the implementor. This was deliberately done. By leaving performance-related decisions to the implementor, the protocol could be tuned and adapted to different networks and situations in the future without the need to revise the specification or break interoperability. Interoperability would continue while future implementations would be allowed the flexibility to adapt to needs which could not be anticipated at the time of the original protocol design.

    Integration of Linux TCP and Simulation: Verification, Validation and Application

    Network simulators have been acknowledged as one of the most flexible means of studying and developing protocols, since they allow a virtually endless number of simulated network environments to be set up and the protocol of interest to be fine-tuned without requiring complicated and costly real-world network experiments. However, depending on the researcher, the same protocol of interest can be developed in different ways, and different implementations may yield outcomes that do not accurately capture the dynamics of the real protocol. In the last decade, TCP, the protocol on which the Internet is based, has been extensively studied in order to reevaluate its performance, particularly when TCP-based applications and services are deployed in an emerging Next Generation Network (NGN) and Next Generation Internet (NGI). As a result, to understand the realistic interaction of TCP with new types of networks and technologies, a combination of a real-world TCP and a network simulator seems essential. This work presents an integration of the real-world TCP implementation of the Linux TCP/IP network stack into a network simulator, called INET. Moreover, verification and validation of the integrated Linux TCP are performed within the INET framework to ensure the validity of the integration. The results clearly confirm that the integrated Linux TCP displays reasonable and consistent dynamics with respect to the behavior of the real-world Linux TCP. Finally, to demonstrate the application of INET with the Linux TCP extension, the algorithms of other Linux TCP variants and their dynamics over a large-bandwidth, long-delay network are briefly presented.
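    Outside the simulator, the Linux congestion control variants mentioned above can also be selected per socket on a real Linux host via the TCP_CONGESTION socket option; a minimal sketch, independent of the INET integration described in the abstract, and assuming the requested module (e.g. vegas) is loaded and permitted on the host:

```python
import socket

# Ask the kernel to use a specific congestion control algorithm for this socket.
# The algorithm must appear in /proc/sys/net/ipv4/tcp_allowed_congestion_control
# (or the caller needs CAP_NET_ADMIN), otherwise setsockopt raises OSError.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"vegas")
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # e.g. b'vegas\x00...'
s.close()
```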

    Improved congestion control for packet switched data networks and the Internet

    Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utility. This thesis is a step in the direction of improved network congestion control. Traditionally the Internet has adopted a best-effort policy while relying on an end-to-end mechanism. Complex functions are implemented by end users, keeping the core routers of the network simple and scalable. This policy also helps in updating the software at the users' end. Thus, most of the functionality of the current Internet lies within the end users' protocols, particularly within the Transmission Control Protocol (TCP). This strategy has worked well to date, but networks have evolved and the traffic volume has increased manyfold; hence routers need to be involved in controlling traffic, particularly during periods of congestion. Other benefits of using routers to control the flow of traffic would be facilitating the introduction of differentiated services or offering different qualities of service to different users. Any real congestion episode, whether due to demand greater than the available bandwidth or to congestion created on a particular target host by computer viruses, will hamper the smooth execution of the offered network services. Thus, the role of congestion control mechanisms in modern computer networks is crucial. In order to find effective solutions to congestion control, in this thesis we use feedback control system models of computer networks. The closed loop formed by TCP/IP between the end hosts, through intermediate routers, relies on implicit feedback of congestion information through returning acknowledgements. This feedback information about the congestion state of the network can be in the form of lost packets, changes in round trip time and the rate of arrival of acknowledgements. Thus, end hosts can execute either reactive or proactive congestion control mechanisms. The former approach uses duplicate acknowledgements and timeouts as congestion signals, as done in TCP Reno, whereas the latter approach depends on changes in the round trip time, as in TCP Vegas. The protocols employing the second approach are still in their infancy, as they cannot co-exist safely with protocols employing the first approach, whereas TCP Reno and its mutations, such as TCP Sack, are presently widely used in computer networks, including the current Internet. These protocols require packet losses to happen before they can detect congestion, thus inherently leading to wastage of time and network bandwidth. Active Queue Management (AQM) is an alternative approach which provides congestion feedback from routers to end users. It makes a network behave as a sensitive closed-loop feedback control system, with a response time of one round trip time, congestion information being delivered to the end hosts so that they can reduce their data sending rates before actual packet losses happen. From this congestion information, end hosts can reduce their congestion window size, thus pumping fewer packets into a congested network until the congestion period is over and routers stop sending congestion signals. Keeping both approaches in view, we have adopted a two-pronged strategy to address the problem of congestion control: adapting the network at its edges as well as at its core routers.
    We begin by introducing TCP/IP based computer networks and defining the congestion control problem. Next we look at different proactive end-to-end protocols, including TCP Vegas, due to its better fairness properties. We address the incompatibility problem between TCP Vegas and TCP Reno by using ECN based on the Random Early Detection (RED) algorithm to adjust the parameters of TCP Vegas. Further, we develop two alternative algorithms, namely optimal minimum variance and generalized optimal minimum variance, for fair end-to-end protocols. The relationship between the (p, 1) proportionally fair algorithm and the generalized algorithm is investigated along with the conditions for its stable operation. Noteworthy is a novel treatment of the issue of transient fairness. This represents the work done on congestion control at the edges of the network. Next, we focus on router-based congestion control algorithms and start with a survey of previous work done in that direction. We select the RED algorithm for further work because it is recommended for the implementation of AQM. First we devise a new Hybrid RED algorithm which employs the instantaneous queue size along with an exponentially weighted moving average queue size for making decisions about packet marking/dropping, and adjusts the average value during periods of low traffic. This algorithm improves the link utilization and packet loss rate as compared to basic RED. We further propose a control theory based Auto-tuning RED algorithm that adapts to changing traffic load. This algorithm can clamp the average queue size to a desired reference value, which can be used to estimate queuing delays for Quality of Service purposes. As an alternative approach to router-based congestion control, we investigate Proportional, Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) principles based control algorithms for AQM. New control-theoretic RED and frequency response based PI and PID control algorithms are developed and their performance is compared with that of existing algorithms. Later we transform the RED and PI principle based algorithms into their adaptive versions using the well-known square root of p formula. The performance of these load-adaptive algorithms is compared with that of the previously developed fixed-parameter algorithms. Apart from some recent research, most of the previous efforts on the design of congestion control algorithms have been heuristic. This thesis provides an effective use of control theory principles in the design of congestion control algorithms. We develop fixed-parameter-type feedback congestion control algorithms as well as their adaptive versions. All of the newly proposed algorithms are evaluated using ns-based simulations. The thesis concludes with a number of research proposals emanating from the work reported.
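    For reference, a minimal sketch of the basic RED marking rule that the Hybrid and Auto-tuning variants above start from (the classic Floyd-Jacobson form with illustrative parameters, not the thesis's modified algorithms):

```python
import random

def red_average(avg: float, q: int, w: float = 0.002) -> float:
    """Exponentially weighted moving average of the instantaneous queue size q."""
    return (1.0 - w) * avg + w * q

def red_mark(avg: float, min_th: float = 5, max_th: float = 15, max_p: float = 0.1) -> bool:
    """Decide whether an arriving packet is marked/dropped (simplified: uses the
    base probability p_b and omits RED's count-based correction)."""
    if avg < min_th:
        return False
    if avg >= max_th:
        return True
    p_b = max_p * (avg - min_th) / (max_th - min_th)
    return random.random() < p_b
```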

    Fade and interference mitigation and network congestion control

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 337-341). Optical communication through the atmospheric channel is commonly known as free-space optical (FSO) communication. When communicating through a clear FSO channel, not only is there atmospheric turbulence, which results in fading of the received signal, but there may also be interference that scatters into the receiver and deteriorates performance. In this thesis, we consider mitigating the fading and interference with diversity coherent and diversity incoherent detection. We derive the performance of diversity coherent and diversity incoherent receivers in the presence of fading and various worst-case interference types. Diversity coherent detection provides significant power gain over diversity direct detection, and most of the benefit can be achieved with a moderate amount of diversity. Moreover, diversity always improves the performance of coherent detection, whereas diversity improves the performance of direct detection only up to an optimal diversity value, beyond which it degrades the performance. We also derive the improvement in expected outage length with diversity, and quantify the amount of interference that the system can handle while still achieving a given outage probability. Although signal fades or 'outages' in an FSO link can be mitigated at the Physical Layer, they cannot be completely eliminated. In a free-space optical network, these outages affect the performance and design of the Transport Layer. The effect of outages on the TCP sender is to diminish its throughput significantly, due to the drastic reduction of its rate when its packets are not received during the outage. We consider a class of TCP-based protocols that is better suited for free-space optical networks. In particular, the protocols in this class have the sender distinguish whether a packet loss is due to an outage or to congestion, and not reduce its rate if the loss was due to an outage. We analyze, using an approximate channel model for FSO links, the maximum performance that can be achieved by a sender in this class, and compare this performance against a TCP sender's performance. The protocols in this class can gain back the performance lost in TCP due to link outages, and they are particularly beneficial when the path has FSO links with strong turbulence and a large bandwidth-delay product. We discuss a possible way to implement the distinction between packet loss due to congestion and packet loss due to link outage. by Etty J. Lee. Ph.D.
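    A schematic of the sender-side rule described above, with the outage test left as a hypothetical predicate (the thesis analyzes an idealized protocol class rather than prescribing one implementation):

```python
def react_to_loss(cwnd: int, loss_during_outage: bool) -> int:
    """Protocol class sketched in the abstract: reduce the congestion window only
    when a loss is attributed to congestion, not when it is attributed to an
    FSO link outage."""
    if loss_during_outage:         # hypothetical estimate of the channel state at loss time
        return cwnd                # keep the sending rate through the outage
    return max(cwnd // 2, 1)       # standard multiplicative decrease for congestion losses
```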

    Congestion Control by Bandwidth-Delay Tradeoff in Very High-Speed Networks: The Case of Window-Based Control

    The increasing bandwidth-delay product of high-speed wide-area networks is well known to make conventional dynamic traffic control schemes sluggish. Still, most existing schemes employ dynamic control, among which TCP and the ATM Forum's rate-based flow control are prominent examples. So far, little has been investigated as to how the existing schemes will scale as bandwidth further increases up to gigabit speed and beyond. Our investigation in this paper is the first to show that dynamic control has a severe scalability problem with bandwidth increase, and to propose an entirely new approach to traffic control that overcomes the scalability problem. The essence of our approach is in exercising control in the bandwidth domain rather than the time domain, in order to avoid time delay in control. This requires more bandwidth than the timed counterpart, but achieves much faster control. Furthermore, the bandwidth requirement is not excessively large, because bandwidth is traded for smaller control delay; hence we call our approach Bandwidth-Latency Tradeoff (BLT). While the control in existing schemes is bound to delay, BLT is bound to bandwidth. As a fallout, BLT scales with bandwidth increase, rather than increasingly deteriorating as conventional schemes do. Surprisingly, our approach begins to pay off much earlier than expected, even from a point where the bandwidth-delay product is not so large. For instance, in a roughly AURORA-sized network, BLT far outperforms TCP on a shared 150 Mbps link, where the bandwidth-delay product is around 60 KB. At the other extreme, where the bandwidth-delay product is large, BLT outperforms TCP by as much as twenty times in terms of network power in a gigabit nationwide network. More importantly, BLT is designed to continue to scale with bandwidth increase, and the performance gap is expected to widen further.
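    The "network power" figure of merit used in the comparison is, in its usual definition, throughput divided by delay; a toy helper makes the twenty-fold claim concrete (the numbers below are placeholders, not the paper's data):

```python
def network_power(throughput_bps: float, delay_s: float) -> float:
    """Classical power metric: throughput divided by delay."""
    return throughput_bps / delay_s

# Placeholder example: on the same 30 ms path, a scheme sustaining 900 Mbit/s
# has 20x the power of one sustaining 45 Mbit/s.
print(network_power(900e6, 0.030) / network_power(45e6, 0.030))  # -> 20.0
```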