2,497 research outputs found
On the Modeling of TCP Latency and Throughput
In this thesis, a new model for the slow-start phase based on the discrete evolution of the congestion window is developed, and we integrate this part into an improved TCP steady-state model for better prediction performance. Combining these short-flow and steady-state models, we propose an extensive stochastic model that can accurately predict the throughput and latency of TCP connections as functions of loss rate, round-trip time (RTT), and file size. We validate our results through simulation experiments. The results show that our model's predictions match the simulation results better than the stochastic models of Padhye and Cardwell, with roughly a 75% improvement in the accuracy of performance predictions for the steady state and a 20% improvement for short-lived TCP flows.
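For context, the steady-state baseline that such models improve upon is the well-known Padhye et al. throughput formula. The sketch below is a minimal, illustrative rendering of that formula (not the thesis's own model); the function name and default parameters are assumptions made here for clarity.

```python
from math import sqrt

def padhye_throughput(p, rtt, t0, b=2, wmax=None):
    """Approximate steady-state TCP throughput (packets/s), Padhye et al. style.

    p    -- loss event probability
    rtt  -- round-trip time in seconds
    t0   -- initial retransmission timeout in seconds
    b    -- packets acknowledged per ACK (delayed ACKs: b=2)
    wmax -- receiver window limit in packets (None = no limit)
    """
    # Congestion-avoidance term plus the timeout term of the full model.
    denom = (rtt * sqrt(2 * b * p / 3)
             + t0 * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    bw = 1.0 / denom
    if wmax is not None:
        # Throughput can never exceed the receiver-window-limited rate.
        bw = min(wmax / rtt, bw)
    return bw
```

As the abstract notes, throughput falls as the loss rate or RTT grows; the formula makes that dependence explicit.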
Long-Haul TCP vs. Cascaded TCP
In this work, we investigate the bandwidth and transfer time of long-haul TCP versus
cascaded TCP [5].
First, we discuss the models for TCP throughput. For TCP flows in support of
bulk data transfer (i.e., long-lived TCP flows), the TCP throughput models have been
derived [2, 3]. These models rely on the congestion-avoidance algorithm of TCP.
Though these models cannot be applied to short-lived TCP connections, our interest
relative to logistical networking is in longer-lived TCP flows anyway, specifically TCP
flows that spend significantly more time in the steady-state congestion-avoidance phase
than in the transient slow-start phase. However, in the case where short-lived TCP
connections must be modeled, several TCP latency models have been proposed [1, 4],
and based on these latency models, the throughput and transfer time of short-lived TCP
connections are obtainable.
Using the above models, the transfer times for a data file of size S packets can be
computed for both long-haul TCP and cascaded TCP. The performance of both systems
is compared via their transfer times. One system is preferred if its transfer time
is lower than that of the other. Based on these performance comparisons, we develop a
decision model that selects between cascaded TCP and long-haul TCP.
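The decision rule described above can be sketched as follows. This is an illustrative skeleton, not the paper's actual model: the function names are invented here, and the treatment of cascading (store-and-forward, where hop times add, versus fully pipelined, where the slowest hop dominates) is an assumption about how the comparison might be set up.

```python
def transfer_time(size_pkts, throughput_pps):
    # Idealized transfer time for a file of size_pkts packets,
    # ignoring slow start and connection setup.
    return size_pkts / throughput_pps

def prefer_cascaded(size_pkts, longhaul_bw, hop_bws, store_and_forward=True):
    """Return True if cascaded TCP finishes sooner than long-haul TCP.

    longhaul_bw -- predicted throughput of the single end-to-end connection
    hop_bws     -- predicted per-hop throughputs of the cascaded connections
    """
    t_long = transfer_time(size_pkts, longhaul_bw)
    if store_and_forward:
        # Each hop completes before the next begins: times add up.
        t_casc = sum(transfer_time(size_pkts, bw) for bw in hop_bws)
    else:
        # Fully pipelined relaying: the slowest hop dominates.
        t_casc = transfer_time(size_pkts, min(hop_bws))
    return t_casc < t_long
```

Because per-hop RTTs (and often loss rates) are much smaller than the end-to-end values, each hop's predicted throughput can be high enough that cascading wins despite the relaying overhead.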
The Effect of Network and Infrastructural Variables on SPDY's Performance
HTTP is a successful Internet technology on top of which a lot of the web
resides. However, limitations with its current specification, i.e. HTTP/1.1,
have encouraged some to look for the next generation of HTTP. In SPDY, Google
has come up with such a proposal that has growing community acceptance,
especially after being adopted by the IETF HTTPbis-WG as the basis for
HTTP/2.0. SPDY has the potential to greatly improve web experience with little
deployment overhead. However, we still lack an understanding of its true
potential in different environments. This paper seeks to resolve these issues,
offering a comprehensive evaluation of SPDY's performance using extensive
experiments. We identify the impact of network characteristics and website
infrastructure on SPDY's potential page loading benefits, finding that these
factors are decisive for SPDY and its optimal deployment strategy. Through
this, we feed into the wider debate regarding HTTP/2.0, exploring the key
aspects that impact the performance of this future protocol.
System Support for Bandwidth Management and Content Adaptation in Internet Applications
This paper describes the implementation and evaluation of an operating system
module, the Congestion Manager (CM), which provides integrated network flow
management and exports a convenient programming interface that allows
applications to be notified of, and adapt to, changing network conditions. We
describe the API by which applications interface with the CM, and the
architectural considerations that factored into the design. To evaluate the
architecture and API, we describe our implementations of TCP; a streaming
layered audio/video application; and an interactive audio application using the
CM, and show that they achieve adaptive behavior without incurring much
end-system overhead. All flows including TCP benefit from the sharing of
congestion information, and applications are able to incorporate new
functionality such as congestion control and adaptive behavior. Comment: 14 pages, appeared in OSDI 2000.
Differentiated Predictive Fair Service for TCP Flows
The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service.
In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random-Early-Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows. National Science Foundation (CAREER ANI-0096045, MRI EIA-9871022).
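The classify-and-mark step described above can be sketched minimally. Everything here is illustrative: the 20-packet threshold, the class names, and the single-object router are assumptions made for clarity; the point is only that the access router needs per-flow counters while the core keeps just two per-class queues.

```python
from collections import defaultdict, deque

SHORT_FLOW_THRESHOLD = 20  # packets; illustrative cutoff, not from the paper

class ClassifyingRouter:
    """Sketch of class-based isolation: flows are marked 'short' until they
    exceed a packet threshold, after which their packets go to the 'long'
    class queue. Per-flow state lives only at the access side; the core
    side holds one queue per class."""

    def __init__(self):
        self.pkts_seen = defaultdict(int)                   # access-router per-flow state
        self.queues = {"short": deque(), "long": deque()}   # core per-class queues

    def enqueue(self, flow_id, packet):
        self.pkts_seen[flow_id] += 1
        cls = "short" if self.pkts_seen[flow_id] <= SHORT_FLOW_THRESHOLD else "long"
        self.queues[cls].append((flow_id, packet))
        return cls
```

With separate queues, a burst of long-flow packets cannot inflate the queueing delay seen by short interactive transfers, which is the isolation benefit the abstract argues for.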
Best effort measurement based congestion control
Re-designing Dynamic Content Delivery in the Light of a Virtualized Infrastructure
We explore the opportunities and design options enabled by novel SDN and NFV
technologies, by re-designing a dynamic Content Delivery Network (CDN) service.
Our system, named MOSTO, provides performance levels comparable to that of a
regular CDN, but does not require the deployment of a large distributed
infrastructure. In the process of designing the system, we identify relevant
functions that could be integrated in the future Internet infrastructure. Such
functions greatly simplify the design and effectiveness of services such as
MOSTO. We demonstrate our system using a mixture of simulation, emulation,
testbed experiments and by realizing a proof-of-concept deployment in a
planet-wide commercial cloud system. Comment: Extended version of the paper accepted for publication in the JSAC
special issue on Emerging Technologies in Software-Driven Communication -
November 201
The Motivation, Architecture and Demonstration of Ultralight Network Testbed
In this paper we describe progress in the NSF-funded Ultralight project and a recent demonstration of Ultralight technologies at SuperComputing 2005 (SC|05). The goal of the
Ultralight project is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. Ultralight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed from end-to-end. Thus we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we present the motivation for, and an overview of, the Ultralight project. We then cover early
results in the various working areas of the project. The remainder of the paper describes our experiences of the Ultralight network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this Challenge, we
achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many sites interconnected by the Ultralight backbone network. The exercise highlighted the benefits of Ultralight's research and development efforts, which enable new and advanced methods of distributed scientific data analysis.