FAST TCP: Motivation, Architecture, Algorithms, Performance
We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties that current TCP implementations face at large windows. We describe the architecture and summarize some of the algorithms implemented in our prototype. We characterize its equilibrium and stability properties. We evaluate it experimentally in terms of throughput, fairness, stability, and responsiveness.
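The heart of FAST TCP is a delay-based per-RTT window update that aims to keep a fixed number of packets queued in the network. A minimal Python sketch of the published update rule follows; the parameter values chosen for `alpha` and `gamma` are illustrative, not the prototype's defaults:

```python
def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    # Delay-based update: scale the window by the ratio of propagation
    # delay to observed RTT, plus alpha packets the flow aims to keep
    # buffered in the network; gamma in (0, 1] damps the step.
    target = (base_rtt / rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

# With a fixed observed RTT the window converges to the point where
# exactly alpha packets are queued: w* = alpha * rtt / (rtt - base_rtt).
w = 100.0
for _ in range(100):
    w = fast_window_update(w, base_rtt=0.100, rtt=0.120)
# here w* = 200 * 0.120 / 0.020 = 1200 packets
```

In equilibrium the window is pinned by the delay ratio rather than by loss events, which is what gives FAST TCP its stability at large bandwidth-delay products.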
SSthreshless Start: A Sender-Side TCP Intelligence for Long Fat Network
Measurements show that 85% of TCP flows in the Internet are short-lived flows
that spend most of their lifetime in the TCP startup phase. However, many
previous studies indicate that the traditional TCP Slow Start algorithm does
not perform well, especially in long fat networks. Two problems are known to
impair Slow Start performance: the blind initial setting of the Slow Start
threshold, and the aggressive increase of the probing rate during the startup
phase regardless of the buffer sizes along the path. Current efforts that tune
the Slow Start threshold and/or the probing rate during the startup phase have
not proved very effective, which motivates a different approach. In this
paper, we present a novel TCP startup method, called threshold-less slow start
or SSthreshless Start, which operates without a Slow Start threshold. Instead,
SSthreshless Start uses the backlog status at the bottleneck buffer to
adaptively adjust the probing rate, allowing it to better seize the available
bandwidth. Compared with the traditional startup method and other major
modified startup methods, our simulation results show that SSthreshless Start
achieves significant performance improvement during the startup phase.
Moreover, SSthreshless Start scales well over a wide range of buffer sizes,
propagation delays and network bandwidths, and shows excellent friendliness
when operating alongside the currently popular TCP NewReno connections.
Comment: 25 pages, 10 figures, 7 tables
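The core idea, throttling startup probing by the estimated bottleneck backlog rather than by a preset threshold, can be illustrated with a hypothetical sketch. The function `probe_rate` and its thresholds below are our own invention for illustration, not the paper's exact rules; traditional Slow Start is shown alongside for contrast:

```python
def slow_start_trace(init_cwnd, ssthresh, max_rounds):
    # Traditional Slow Start: cwnd doubles each RTT until it reaches
    # a blindly preset ssthresh, then hands off to congestion avoidance.
    cwnd, trace = init_cwnd, []
    for _ in range(max_rounds):
        trace.append(cwnd)
        if cwnd >= ssthresh:
            break
        cwnd = min(2 * cwnd, ssthresh)
    return trace

def probe_rate(cwnd, rtt, base_rtt, buffer_est):
    # Hypothetical backlog-aware probing: estimate this flow's backlog
    # at the bottleneck queue from RTT inflation, and throttle growth
    # before the buffer overflows (not the paper's exact rule).
    backlog = cwnd * (1 - base_rtt / rtt)
    if backlog < 0.5 * buffer_est:
        return 2 * cwnd          # buffer lightly used: keep doubling
    return cwnd + 1              # nearing overflow: linear growth
```

The point of the backlog signal is that it adapts to the actual path, so the startup phase neither stops prematurely at a too-small threshold nor overshoots a shallow buffer.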
Endpoint-transparent Multipath Transport with Software-defined Networks
Multipath forwarding consists of using multiple paths simultaneously to
transport data over the network. While most such techniques require endpoint
modifications, we investigate how multipath forwarding can be done inside the
network, transparently to endpoint hosts. With such a network-centric approach,
packet reordering becomes a critical issue as it may cause severe performance
degradation.
We present a Software Defined Network architecture which automatically sets
up multipath forwarding, including solutions for reordering and performance
improvement, both at the sending side through multipath scheduling algorithms,
and at the receiver side, by resequencing out-of-order packets in a dedicated
in-network buffer.
We implemented a prototype with commonly available technology and evaluated
it in both emulated and real networks. Our results show consistent throughput
improvements, thanks to the use of aggregated path capacity. We give
comparisons to Multipath TCP, where we show our approach can achieve similar
performance while offering the advantage of endpoint transparency.
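The in-network resequencing buffer described above can be pictured as a priority queue keyed on sequence numbers that releases packets only once they are in order. The class below is an illustrative minimum with names of our own choosing; a real deployment would also need timeout handling for lost packets:

```python
import heapq

class ResequencingBuffer:
    """Holds out-of-order packets arriving over multiple paths and
    releases them to the receiver strictly in sequence order."""
    def __init__(self):
        self._heap = []       # min-heap of (seq, packet)
        self._next_seq = 0    # next in-order sequence number expected

    def push(self, seq, packet):
        # Buffer the arrival, then drain every packet that is now
        # contiguous with the in-order stream.
        heapq.heappush(self._heap, (seq, packet))
        released = []
        while self._heap and self._heap[0][0] == self._next_seq:
            released.append(heapq.heappop(self._heap)[1])
            self._next_seq += 1
        return released
```

For example, if packet 1 arrives on a fast path before packet 0 arrives on a slow one, `push(1, ...)` returns nothing, and the later `push(0, ...)` releases both packets in order, hiding the reordering from the endpoint.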
Equilibrium of Heterogeneous Congestion Control: Optimality and Stability
When heterogeneous congestion control protocols
that react to different pricing signals share the same network,
the current theory based on utility maximization fails to predict
the network behavior. The pricing signals can be different types
of signals such as packet loss, queueing delay, etc., or different
values of the same type of signal such as different ECN marking
values based on the same actual link congestion level. Unlike in a
homogeneous network, the bandwidth allocation now depends on
router parameters and flow arrival patterns. It can be non-unique,
suboptimal and unstable. In Tang et al. (“Equilibrium of heterogeneous
congestion control: Existence and uniqueness,” IEEE/ACM
Trans. Netw., vol. 15, no. 4, pp. 824–837, Aug. 2007), existence and
uniqueness of equilibrium of heterogeneous protocols are investigated.
This paper extends the study with two objectives: analyzing
the optimality and stability of such networks and designing control
schemes to improve those properties. First, we demonstrate the
intricate behavior of a heterogeneous network through simulations
and present a framework to help understand its equilibrium
properties. Second, we propose a simple source-based algorithm
to decouple bandwidth allocation from router parameters and
flow arrival patterns by only updating a linear parameter in the
sources’ algorithms on a slow timescale. It steers a network to
the unique optimal equilibrium. The scheme can be deployed
incrementally as the existing protocol needs no change and only
new protocols need to adopt the slow timescale adaptation.
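The slow-timescale adaptation can be pictured as each new-protocol source drifting a single scalar that rescales its price signal. The sketch below is a hypothetical caricature of that idea, not the paper's algorithm: `kappa` multiplies the source's observed price, and a small-step update drives the effective price toward a common target so the equilibrium no longer depends on router parameters.

```python
def update_scale(kappa, measured_price, target_price, step=0.01):
    # Hypothetical slow-timescale update: nudge the linear parameter
    # kappa so that the source's effective price (kappa * measured)
    # tracks a common target price. The small step keeps this loop
    # much slower than the underlying congestion control dynamics.
    return kappa + step * (target_price - measured_price * kappa)

# With constant prices, kappa converges to target / measured.
kappa = 1.0
for _ in range(1000):
    kappa = update_scale(kappa, measured_price=2.0, target_price=1.0)
```

The separation of timescales is the key design choice: because `kappa` moves slowly, the fast congestion control loop sees an effectively fixed protocol at any instant, while the slow drift steers the network toward the unique optimal equilibrium.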
ACTS 118x Final Report High-Speed TCP Interoperability Testing
With the recent explosion of the Internet and the enormous business opportunities available to communication system providers, great interest has developed in improving the efficiency of data transfer using the Transmission Control Protocol (TCP) of the Internet Protocol (IP) suite. Satellite system providers are interested in solving TCP efficiency problems associated with long delays and error-prone links. Similarly, the terrestrial community is interested in solving TCP problems over high-bandwidth links, while the wireless community is interested in improving TCP performance over bandwidth-constrained, error-prone links.
NASA realized that solutions had already been proposed for most of the problems associated with efficient data transfer over large bandwidth-delay links (which include satellite links). The solutions are detailed in various Internet Engineering Task Force (IETF) Requests for Comments (RFCs). Unfortunately, most of these solutions had not been tested at high speed (155+ Mbps). Therefore, NASA's ACTS experiments program initiated a series of TCP experiments to demonstrate the scalability of TCP/IP and determine how far the protocol can be optimised over a 622 Mbps satellite link. These experiments were known as the 118i and 118j experiments.
During the 118i and 118j experiments, NASA worked closely with SUN Microsystems and FORE Systems to improve the operating system, TCP stacks, and network interface cards and drivers. We were able to obtain instantaneous data throughput rates of greater than 529 Mbps and average throughput rates of 470 Mbps using TCP over Asynchronous Transfer Mode (ATM) over a 622 Mbps Synchronous Optical Network (SONET) OC12 link. Following the success of these experiments and the successful government/industry collaboration, a new series of experiments, the 118x experiments, was developed.
STCP: A New Transport Protocol for High-Speed Networks
Transmission Control Protocol (TCP) is the dominant transport protocol today and is likely to be adopted in future high-speed and optical networks. A number of prior works have modified or tuned the Additive Increase Multiplicative Decrease (AIMD) principle in TCP to enhance network performance. In this work, to efficiently take advantage of the high bandwidth available from high-speed and optical infrastructures, we propose a Stratified TCP (STCP) employing parallel virtual transmission layers in high-speed networks. In this technique, the AIMD principle of TCP is modified so that the available link bandwidth is probed more aggressively and efficiently, which in turn increases performance. Simulation results show that STCP offers a considerable improvement in performance when compared with other TCP variants such as the conventional TCP protocol and Layered TCP (LTCP).
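The stratification idea, running several virtual AIMD layers over one connection so that aggregate probing is faster while a loss penalizes only one layer's share, can be sketched as follows. This is an illustrative model of the general technique, not the paper's exact algorithm:

```python
class StratifiedAIMD:
    """N virtual transmission layers, each running its own AIMD loop.
    The aggregate window grows by N segments per RTT, and a single
    loss halves only the affected layer (illustrative sketch)."""
    def __init__(self, n_layers, init=1.0):
        self.layers = [init] * n_layers

    def on_ack_round(self):
        # Additive increase: each virtual layer adds one segment per
        # RTT, so the aggregate grows N times faster than one flow.
        self.layers = [w + 1.0 for w in self.layers]

    def on_loss(self, layer):
        # Multiplicative decrease confined to the layer that saw the
        # loss, so the aggregate drops by only 1/(2N) instead of 1/2.
        self.layers[layer] /= 2.0

    @property
    def cwnd(self):
        return sum(self.layers)
```

For instance, with four layers one ACK round raises the aggregate window by four segments, and a subsequent loss on layer 0 costs only that layer's half, which is why stratified probing recovers bandwidth far faster than a single AIMD flow on a long fat pipe.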
Improving the performance of HTTP over high bandwidth-delay product circuits
As the WWW continues to grow, providing adequate bandwidth to countries remote from the geographic and topological center of the network, such as those in the Asia/Pacific region, becomes more and more difficult. To meet the growing traffic needs of the Internet, some Network Service Providers are deploying satellite connections. Through discrete event simulation of a real HTTP workload with differing international architectures, this paper gives guidance on the architecture that should be deployed for long-distance, high-capacity Internet links.
We show that a significant increase in the time taken to fetch HTTP requests can be expected when traffic is moved from a long-distance international terrestrial link to a satellite link. We then show several modifications to the network architecture that can be used to greatly improve the performance of a satellite link. These modifications include the use of an asymmetric satellite link, the multiplexing of multiple HTTP requests onto a single TCP connection, and the use of HTTP/1.1.