Batch Processor Sharing with Hyper-Exponential Service Time
We study the Batch Processor-Sharing (BPS) queueing model with hyper-exponential service time distribution and a Poisson batch arrival process. One of the main motivations for studying BPS is its potential application to size-based scheduling, which is used to differentiate between short and long flows in the Internet. For the hyper-exponential service time distribution we find an analytical expression for the expected conditional response time of the BPS queue. We show that the expected conditional response time is a concave function of the service time. We apply the obtained results to the Two-Level Processor-Sharing (TLPS) model with hyper-exponential service time distribution and derive the expression for the expected response time of the TLPS model. The TLPS scheduling discipline can be applied to size-based differentiation in TCP/IP networks and Web server request handling.
Comment: Sophia Antipolis, France, 03 May 200
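As an illustration of the hyper-exponential service times assumed above (a minimal sketch; the two-phase mixture parameters below are arbitrary and not taken from the paper), one can sample such a distribution and check that it is indeed more variable than an exponential, i.e. its squared coefficient of variation exceeds 1:

```python
import random

def sample_hyperexp(p=0.9, mu1=10.0, mu2=0.5):
    """Two-phase hyper-exponential: rate mu1 with prob. p, rate mu2 otherwise."""
    mu = mu1 if random.random() < p else mu2
    return random.expovariate(mu)

random.seed(0)
samples = [sample_hyperexp() for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
scv = var / mean ** 2   # squared coefficient of variation
print(f"mean={mean:.3f}  SCV={scv:.2f}")
```

For these parameters the exact mean is p/mu1 + (1-p)/mu2 = 0.29 and the SCV is well above 1, which is what makes this family a convenient model of highly variable flow sizes.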
Bandwidth sharing with heterogeneous service requirements
We consider a system with two heterogeneous traffic classes. The users from both classes randomly generate service requests, one class having light-tailed properties, the other one exhibiting heavy-tailed characteristics. The heterogeneity in service requirements reflects the extreme variability in flow sizes observed in the Internet, with a vast majority of small transfers ('mice') and a limited number of exceptionally large flows ('elephants'). The active traffic flows share the available bandwidth in a Processor-Sharing (PS) fashion. The PS discipline has emerged as a natural paradigm for modeling the flow-level performance of bandwidth-sharing protocols like TCP. The number of simultaneously active traffic flows is limited by a threshold on the maximum system occupancy. We obtain the exact asymptotics of the transfer delays incurred by the users from the light-tailed class. The results show that the threshold mechanism significantly reduces the detrimen
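The admission threshold described above can be illustrated with a toy occupancy model (an assumption for illustration only: with exponential flow sizes the number of active PS flows behaves like an M/M/1/K queue; the paper's light/heavy-tailed setting is richer):

```python
def mm1k_distribution(rho, K):
    """Stationary occupancy distribution of an M/M/1/K queue:
    PS-flow count with exponential flow sizes and admission threshold K."""
    weights = [rho ** n for n in range(K + 1)]
    total = sum(weights)
    return [w / total for w in weights]

rho, K = 0.8, 10
pi = mm1k_distribution(rho, K)
blocking = pi[K]                               # arriving flow rejected
mean_flows = sum(n * p for n, p in enumerate(pi))
print(f"P(block)={blocking:.4f}  E[active flows]={mean_flows:.2f}")
```

The blocking probability equals the classical form rho^K (1-rho) / (1 - rho^{K+1}); raising K admits more flows at the cost of each flow receiving a smaller bandwidth share.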
M/G/1/MLPS compared to M/G/1/PS
Multilevel Processor Sharing (MLPS) scheduling disciplines have recently been resurrected in papers that focus on the differentiation between short and long TCP flows in the Internet. We prove that, for M/G/1 queues, such disciplines are better than the Processor-Sharing discipline with respect to the mean delay whenever the hazard rate of the service time distribution is decreasing.
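The decreasing-hazard-rate condition in this result can be checked numerically for a concrete distribution; the sketch below (parameters are illustrative) evaluates the hazard rate of a two-phase hyper-exponential, a standard example of a DHR distribution:

```python
import math

def hyperexp_hazard(x, p=0.5, mu1=4.0, mu2=0.5):
    """Hazard rate f(x) / (1 - F(x)) of a two-phase hyper-exponential."""
    f = p * mu1 * math.exp(-mu1 * x) + (1 - p) * mu2 * math.exp(-mu2 * x)
    sf = p * math.exp(-mu1 * x) + (1 - p) * math.exp(-mu2 * x)
    return f / sf

xs = [i * 0.1 for i in range(100)]
hs = [hyperexp_hazard(x) for x in xs]
decreasing = all(h2 <= h1 + 1e-12 for h1, h2 in zip(hs, hs[1:]))
print("hazard rate decreasing:", decreasing)
```

The hazard rate starts at the mixture of the phase rates and decays toward the slowest rate mu2: intuitively, the longer a job has been in service, the more likely it belongs to the slow phase, which is exactly the situation where favoring young (short) jobs reduces mean delay.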
A Network Congestion control Protocol (NCP)
The Transmission Control Protocol (TCP), the dominant congestion control protocol at the transport layer, has been shown to have many performance problems as the Internet has grown. TCP, for instance, suffers throughput degradation in high bandwidth-delay product networks and is unfair to flows with high round-trip delays. There have been many patches and modifications to TCP, all of which inherit the problems of TCP in spite of some performance improvements.
On the other hand, there are clean-slate design approaches to the Internet. The eXplicit Congestion control Protocol (XCP) and the Rate Control Protocol (RCP) are the prominent clean-slate congestion control protocols. Nonetheless, XCP has also been shown to have performance problems of its own, among them its unfairness to long flows (flows with high round-trip delay) and its many per-packet computations at the router. As shown in this paper, RCP also makes a gross approximation in an important component, so that it may only deliver the performance reported in the literature for specific choices of its parameter values and traffic patterns.
In this paper we present a new congestion control protocol called the Network congestion Control Protocol (NCP). We show that NCP can outperform TCP, XCP, and RCP in terms of, among other things, fairness and file download times.
unpublished
An integrated packet/flow model for TCP performance analysis
Processor-sharing (PS) models for TCP behavior nicely capture the bandwidth sharing and statistical multiplexing of TCP flows at the flow level. However, these ‘rough’ models do not provide insight into the impact of packet-level parameters (such as round-trip time and buffer size) on, e.g., throughput and flow transfer times. This paper proposes an integrated packet/flow-level model: it exploits the advantages of the PS approach at the flow level and, at the same time, incorporates the most significant packet-level effects.
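The flow-level PS baseline such models build on can be sketched in a few lines: under M/G/1-PS with load rho < 1 on a link of capacity C, a flow of size x has expected transfer time x / (C(1 - rho)) (a classical PS result; the numbers below are illustrative, not from the paper):

```python
def ps_transfer_time(size_bits, capacity_bps, load):
    """Expected transfer time of a flow of the given size under M/G/1-PS
    on a link of capacity C with offered load rho = lambda * E[S] / C < 1."""
    assert 0 <= load < 1
    return size_bits / (capacity_bps * (1 - load))

# 1 MB flow on a 10 Mbit/s link at 60% load:
t = ps_transfer_time(8e6, 10e6, 0.6)
print(f"expected transfer time: {t:.2f} s")
```

Packet-level effects such as round-trip time and buffer size do not appear in this formula at all, which is precisely the gap the integrated model in the abstract aims to close.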
Integration of streaming services and TCP data transmission in the Internet
We study in this paper the integration of elastic and streaming traffic on the same link in an IP network. We are specifically interested in the computation of the mean bit rate obtained by a data transfer. For this purpose, we consider that the bit rate offered by streaming traffic is low, of the order of magnitude of a small parameter ε ≪ 1, and related to an auxiliary stationary Markovian process (X(t)). Under the assumption that data transfers are exponentially distributed, arrive according to a Poisson process, and share the available bandwidth according to the ideal processor-sharing discipline, we derive the mean bit rate of a data transfer as a power series expansion in ε. Since the system can be described by means of an M/M/1 queue with a time-varying server rate, which depends upon the parameter ε and the process (X(t)), the key issue is to compute an expansion of the area swept under the occupation process of this queue in a busy period. We obtain closed formulas for the power series expansion in ε of the mean bit rate, which allow us to verify the validity of the so-called reduced service rate approximation at the first order. The second-order term yields more insight into the negative impact of the variability of streaming flows.
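As a sanity check on the key quantity mentioned above, the toy simulation below (an assumption for illustration: constant server rate, i.e. the ε = 0 case) estimates the area swept under the M/M/1 occupation process over a busy period and compares it with the classical closed form μ/(μ − λ)²:

```python
import random

def busy_period_area(lam, mu, rng):
    """Area under the queue-length process during one M/M/1 busy period."""
    n, area = 1, 0.0
    while n > 0:
        rate = lam + mu                    # next transition rate
        dt = rng.expovariate(rate)
        area += n * dt                     # accumulate area at current level
        n += 1 if rng.random() < lam / rate else -1
    return area

rng = random.Random(42)
lam, mu = 0.5, 1.0
n_runs = 100_000
est = sum(busy_period_area(lam, mu, rng) for _ in range(n_runs)) / n_runs
exact = mu / (mu - lam) ** 2               # classical M/M/1 value
print(f"simulated={est:.3f}  exact={exact:.3f}")
```

The closed form follows from renewal reward: the mean queue length ρ/(1−ρ) times the mean cycle length 1/λ + 1/(μ−λ) simplifies to μ/(μ−λ)².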
Real-time detection of grid bulk transfer traffic
The current practice of physical science research has yielded a continuously growing demand for interconnection network bandwidth to support the sharing of large datasets. Academic research networks and Internet service providers have provisioned their networks to handle this type of load, which generates prolonged, high-volume traffic between nodes on the network. Maintaining QoS for all network users demands that the onset of these (Grid bulk) transfers be detected so that they can be re-engineered through resources specifically provisioned to handle this type of traffic. This paper describes a real-time detector that operates at full line rate on Gb/s links, sustains high connection rates, and can track the use of ephemeral or non-standard ports.
Multiplexing regulated traffic streams: design and performance
The main network solutions for supporting QoS rely on traffic policing (conditioning, shaping). In particular, for IP networks the IETF has developed Intserv (individual flows regulated) and Diffserv (only aggregates regulated). The proposed regulator could be based on the (dual) leaky-bucket mechanism. This explains the interest in network element performance (loss, delay) for leaky-bucket regulated traffic. This paper describes a novel approach to the above problem. Explicitly using the correlation structure of the sources’ traffic, we derive approximations for both small and large buffers. Importantly, for small (large) buffers the short-term (long-term) correlations are dominant. The large-buffer result decomposes the traffic stream into a stream of constant rate and a periodic impulse stream, allowing direct application of the Brownian bridge approximation. Combining the small and large buffer results by a concave majorization, we propose a simple, fast and accurate technique to statistically multiplex homogeneous regulated sources. To address heterogeneous inputs, we present similarly efficient techniques to evaluate the performance of multiple classes of traffic, each with distinct characteristics and QoS requirements. These techniques, applicable under more general conditions, are based on optimal resource (bandwidth and buffer) partitioning. They can also be directly applied to set GPS (Generalized Processor Sharing) weights and buffer thresholds in a shared-resource system.
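The (single) leaky-bucket regulation mentioned above can be sketched as a token bucket (a minimal illustration only; the paper's dual leaky bucket combines two such constraints, and the parameters below are arbitrary):

```python
class TokenBucket:
    """Simple (sigma, rho) token-bucket regulator: bursts of at most
    `sigma` bytes, sustained rate `rho` bytes/s."""

    def __init__(self, sigma, rho):
        self.sigma, self.rho = sigma, rho
        self.tokens = sigma          # bucket starts full
        self.last = 0.0

    def conforms(self, t, size):
        """True if a packet of `size` bytes at time `t` conforms;
        consumes tokens when it does."""
        self.tokens = min(self.sigma, self.tokens + (t - self.last) * self.rho)
        self.last = t
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(sigma=3000, rho=1000)   # 3 KB burst, 1 KB/s sustained
print(tb.conforms(0.0, 1500))  # True: within the burst allowance
print(tb.conforms(0.0, 1500))  # True: burst allowance now exhausted
print(tb.conforms(0.0, 1500))  # False: no tokens left
print(tb.conforms(2.0, 1500))  # True: 2 s of refill adds 2000 tokens
```

A dual leaky bucket would run two such regulators in series (one for peak rate, one for sustained rate), admitting a packet only if it conforms to both.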