The Effect of Positive Correlations on Buffer Occupancy: Comparison and Lower Bounds via Supermodular Ordering
We use recent advances from the theory of multivariate stochastic orderings to formalize the "folk theorem" to the effect that positive correlations lead to increased buffer occupancy and larger buffer levels at a discrete time multiplexer queue of infinite capacity. We do so by comparing input sequences in the supermodular (sm) ordering and the corresponding buffer contents in the increasing convex (icx) ordering, respectively. Three popular classes of (discrete-time) traffic models are considered here, namely, the fractional Gaussian noise (FGN) traffic model, the on-off source model and the M|G|infinity traffic model. The independent version of an input process in each of these classes of traffic models is a member of the same class. We show that this independent version is smaller than the input sequence itself and that the corresponding buffer content processes are ordered in the same direction. For each traffic model, we show by simulations that the first and second moments of buffer levels are ordered in agreement with the comparison results. The more general version of the folk theorem, namely "the larger the positive correlations of input traffic, the higher the buffer occupancy levels," is established in some cases. For the FGN traffic models, we show that the process with higher Hurst parameter is larger than the process with smaller Hurst parameter. In the case of the M|G|infinity model, the effect of session-duration variability is discussed and the comparison result is obtained in the bivariate case.
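As an illustrative numerical sketch of the folk theorem (ours, not taken from the paper), the snippet below compares the mean buffer content of a discrete-time infinite-capacity queue, computed via the Lindley recursion, when fed by a positively correlated on-off input versus its "independent version" obtained by shuffling the same sequence (identical marginals, correlations destroyed). All parameter values are arbitrary.

```python
import random

def lindley(arrivals, c):
    """Mean buffer content via the Lindley recursion q_{n+1} = max(q_n + a_n - c, 0)."""
    q, total = 0.0, 0.0
    for a in arrivals:
        q = max(q + a - c, 0.0)
        total += q
    return total / len(arrivals)

def onoff(n, p_stay, rng):
    """Two-state on-off source: the source stays in its current state w.p. p_stay,
    so larger p_stay means stronger positive correlations."""
    state, out = rng.random() < 0.5, []
    for _ in range(n):
        if rng.random() > p_stay:
            state = not state
        out.append(1.0 if state else 0.0)
    return out

rng = random.Random(1)
n, c = 200_000, 0.6
corr = onoff(n, 0.98, rng)   # positively correlated input sequence
iid = corr[:]                # "independent version": same marginals,
rng.shuffle(iid)             # correlations destroyed by shuffling
print(lindley(corr, c), lindley(iid, c))
```

On a long run the correlated input yields a markedly larger mean buffer level than its shuffled counterpart, in agreement with the ordering results described above.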
Introduction to Queueing Theory and Stochastic Teletraffic Models
The aim of this textbook is to provide students with basic knowledge of
stochastic models that may apply to telecommunications research areas, such as
traffic modelling, resource provisioning and traffic management. These study
areas are often collectively called teletraffic. This book assumes prior
knowledge of a programming language, mathematics, probability and stochastic
processes normally taught in an electrical engineering course. For students who
have some but not sufficiently strong background in probability and stochastic
processes, we provide, in the first few chapters, background on the relevant
concepts in these areas.
Some aspects of traffic control and performance evaluation of ATM networks
The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate through statistical multiplexing large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. The thesis studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed. It is shown that the neural controller is very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between the past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feed forward neural network is employed to model the nonlinear relationship between the effective bandwidth and the traffic situations and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation has indicated that CAC schemes based on effective bandwidth approximation can be very conservative and prevent optimal use of network resources. 
A modified effective bandwidth CAC approach is therefore proposed to overcome the drawback of conventional methods. Considering statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic which is modelled by a two-state Markov modulated Poisson process via matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed, which is a refinement of the original effective bandwidth approximation and can lead to higher link utilisation.
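As a small illustration of the effective-bandwidth idea discussed above (a sketch using the standard large-deviations formula for a two-state on-off Markov fluid source, not the thesis's MMPP matching procedure), the function below evaluates the effective bandwidth and its two classical limits: the mean rate as the space parameter theta tends to 0, and the peak rate as theta grows. Parameter values are illustrative.

```python
import math

def eff_bw(theta, R, a, b):
    """Effective bandwidth of an on-off Markov fluid source with peak rate R,
    off->on rate a, on->off rate b (standard large-deviations form):
        alpha(theta) = [R*theta - a - b + sqrt((R*theta - a - b)**2 + 4*a*R*theta)] / (2*theta)
    It interpolates between the mean rate (theta -> 0) and the peak rate (theta -> inf)."""
    x = R * theta - a - b
    return (x + math.sqrt(x * x + 4 * a * R * theta)) / (2 * theta)

R, a, b = 10.0, 1.0, 3.0            # peak rate 10, mean rate R*a/(a+b) = 2.5
mean_rate = R * a / (a + b)
print(eff_bw(1e-6, R, a, b))        # close to the mean rate for small theta
print(eff_bw(100.0, R, a, b))       # close to the peak rate for large theta
```

A stricter QOS (larger theta) thus pushes the bandwidth allocated per source from its mean toward its peak rate, which is the trade-off the CAC schemes above exploit.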
Asymptotic analysis by the saddle point method of the Anick-Mitra-Sondhi model
We consider a fluid queue where the input process consists of N identical
sources that turn on and off at exponential waiting times. The server works at
the constant rate c and an on source generates fluid at unit rate. This model
was first formulated and analyzed by Anick, Mitra and Sondhi. We obtain an
alternate representation of the joint steady state distribution of the buffer
content and the number of on sources. This is given as a contour integral that
we then analyze for large N. We give detailed asymptotic results for the joint
distribution, as well as the associated marginal and conditional distributions.
In particular, simple conditional limit laws are obtained. These show how the
buffer content behaves conditioned on the number of active sources and vice
versa. Numerical comparisons show that our asymptotic results are very accurate
even for N=20.
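A building block of the Anick-Mitra-Sondhi analysis can be sketched directly (this snippet is ours, with illustrative parameter values): since each source turns on and off independently at exponential rates, the stationary number of on sources is Binomial(N, p) with p = lam_on/(lam_on + lam_off), and the probability that the instantaneous input rate exceeds the server rate c is the probability that the buffer is momentarily filling.

```python
from math import comb

def on_dist(N, lam_on, lam_off):
    """Stationary distribution of the number of 'on' sources: each source is an
    independent two-state Markov process, on w.p. p = lam_on/(lam_on + lam_off),
    so the count is Binomial(N, p)."""
    p = lam_on / (lam_on + lam_off)
    return [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]

N, c = 20, 8.0                       # 20 sources, server rate c (unit on-rate)
dist = on_dist(N, 1.0, 2.0)          # p = 1/3, mean load N*p = 20/3 < c
p_fill = sum(pk for k, pk in enumerate(dist) if k > c)
print(p_fill)                        # probability the input rate exceeds c
```

The joint law of (buffer content, number of on sources) analyzed in the paper refines this marginal picture; the contour-integral representation and its saddle point asymptotics describe how the buffer behaves conditioned on each value of the count.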
From burstiness characterisation to traffic control strategy: a unified approach to integrated broadband networks
The major challenge in the design of an integrated network is the integration and
support of a wide variety of applications. To provide the requested performance
guarantees, a traffic control strategy has to allocate network resources according
to the characteristics of input traffic. Specifically, the definition of traffic characterisation
is significant in network conception. In this thesis, a traffic stream
is characterised based on a virtual queue principle. This approach provides the
necessary link between network resources allocation and traffic control.
It is difficult to guarantee performance without prior knowledge of the worst
behaviour in statistical multiplexing. Accordingly, we investigate the worst case
scenarios in a statistical multiplexer. We evaluate the upper bounds on the probabilities
of buffer overflow in a multiplexer, and data loss of an input stream. It is
found that in networks without traffic control, simply controlling the utilisation of
a multiplexer does not improve the ability to guarantee performance. Instead, the
availability of buffer capacity and the degree of correlation among the input traffic
dominate the loss performance.
The leaky bucket mechanism has been proposed to prevent ATM networks from
performance degradation due to congestion. We study the leaky bucket mechanism
as a regulation element that protects an input stream. We evaluate the optimal
parameter settings and analyse the worst case performance. To investigate its effectiveness,
we analyse the delay performance of a leaky bucket regulated multiplexer.
Numerical results show that the leaky bucket mechanism can provide well-behaved
traffic with guaranteed delay bound in the presence of misbehaving traffic.
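As a concrete illustration (a minimal sketch of the standard conformance test, not the thesis's LB-Dynamic policy), a leaky-bucket regulator can be written as follows; rho is the leak (token) rate and sigma the bucket depth, both chosen arbitrarily here.

```python
def leaky_bucket(arrivals, rho, sigma):
    """Leaky-bucket (token-bucket) conformance test: tokens accrue at rate rho
    up to depth sigma; a packet of size s arriving at time t conforms iff s
    tokens are available. Returns the list of conforming (time, size) pairs."""
    tokens, last_t, out = sigma, 0.0, []
    for t, s in arrivals:
        tokens = min(sigma, tokens + rho * (t - last_t))  # refill since last arrival
        last_t = t
        if s <= tokens:
            tokens -= s
            out.append((t, s))                            # conforming packet
    return out

# A burst of 5 unit packets at t=0 against (rho=1, sigma=3):
print(leaky_bucket([(0, 1), (0, 1), (0, 1), (0, 1), (0, 1)], rho=1.0, sigma=3.0))
# -> [(0, 1), (0, 1), (0, 1)]  (only sigma = 3 packets of the burst conform)
```

A stream that always passes this test is (sigma, rho)-constrained, which is precisely the burstiness characterisation on which resource allocation and the delay guarantees discussed above can be based.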
Using the leaky bucket mechanism, a general strategy based on burstiness characterisation,
called the LB-Dynamic policy, is developed for packet scheduling.
This traffic control strategy is closely related to the allocation of both bandwidth
and buffer in each switching node. In addition, the LB-Dynamic policy monitors
the allocated network resources and guarantees the network performance of each
established connection, irrespective of the traffic intensity and arrival patterns of
incoming packets. Simulation studies demonstrate that the LB-Dynamic policy is
able to provide the requested service quality for heterogeneous traffic in integrated
broadband networks.
Buffer Engineering for M|G|infinity Input Processes
We suggest the M|G|infinity input process as a viable model for representing the heavy correlations observed in network traffic. Originally introduced by Cox, this model represents the busy-server process of an M|G|infinity queue with Poisson inputs and general service times distributed according to G, and provides a large and versatile class of traffic models. We examine various properties of the process, focusing particularly on its rich correlation structure. The process is shown to effectively portray short- or long-range dependence simply by controlling the tail of the distribution G. In an effort to understand the dynamics of a system supporting such traffic, we study the large buffer asymptotics of a multiplexer driven by an M|G|infinity input process. Using the large deviations framework developed by Duffield and O'Connell, we investigate the tail probabilities for the steady-state buffer content. The key step in this approach is the identification of the appropriate large deviations scaling. This scaling is shown to be closely related to the forward recurrence time of the service time distribution, and a closed form expression is derived for the corresponding limiting log-moment generating function associated with the input process. Three different regimes are identified. The results are then applied to obtain the large buffer asymptotics under a variety of service time distributions. In each case, the derived asymptotics are compared with simulation results. While the general functional form of buffer asymptotics may be derived via large deviations techniques, direct arguments often provide a more precise description when the input traffic is heavily correlated. Even so, several significant inferences may be drawn from the functional dependencies of the tail buffer probabilities. The asymptotics already indicate a sub-exponential behavior in the case of heavily-correlated traffic, in sharp contrast to the geometric decay usually observed for Markovian input streams.
This difference, along with a shift in the explicit dependence of the asymptotics on the input and output rates as the service time distribution changes from exponential to sub-exponential, clearly delineates the heavy and light tailed cases. Finally, comparison with similar asymptotics for a different class of input processes indicates that buffer sizing cannot be adequately determined by appealing solely to the short versus long-range dependence characterization of the input model used.
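The role of the service-time tail can be seen in a small sketch (ours, using the standard M/G/infinity identity that the lag-k autocovariance of the busy-server count is proportional to the integrated tail E[(sigma - k)^+]; the arrival-rate factor is omitted):

```python
def integrated_tail(tail, k, horizon=100_000):
    """E[(sigma - k)^+] = sum over j >= k of P(sigma > j): up to the arrival
    rate lambda, this gives the lag-k autocovariance of the M/G/infinity
    busy-server process (truncated at a large horizon for computation)."""
    return sum(tail(j) for j in range(k, horizon))

geo = lambda j: 0.5 ** j             # light (geometric) service-time tail
par = lambda j: (1 + j) ** -1.5      # heavy (Pareto-like) tail

# Ratio of lag-64 to lag-1 covariance: essentially zero for the light tail,
# still substantial for the heavy tail (long-range dependence).
print(integrated_tail(geo, 64) / integrated_tail(geo, 1))
print(integrated_tail(par, 64) / integrated_tail(par, 1))
```

This is the mechanism behind the abstract's claim: a geometric service-time tail gives summable correlations and short-range dependence, while a Pareto-like tail gives slowly decaying correlations and the sub-exponential buffer behavior described above.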
On the performance evaluation of multi-guarded marked graphs with single-server semantics
In discrete event systems, a given task can start executing when all the required input data are available. The required input data for a given task may change along the evolution of the system. A way of modeling this changing requirement is through multi-guarded tasks. This paper studies the performance evaluation of the class of marked graphs extended with multi-guarded transitions (or tasks). Although the throughput of such systems can be computed through Markov chain analysis, two alternative methods are proposed to avoid the state explosion problem. The first one obtains throughput bounds in polynomial time through linear programming. The second one yields a small subsystem that estimates the throughput of the whole system.
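For ordinary (single-guard) timed marked graphs there is a classical cycle-ratio throughput bound, the minimum over directed cycles of tokens(cycle)/delay(cycle), in the same spirit as the polynomial-time bounds discussed above. A brute-force sketch (ours; delays are folded onto arcs for simplicity, and the cycle enumeration is only suitable for toy graphs):

```python
from itertools import permutations

def throughput_bound(nodes, arcs):
    """Upper bound on the throughput of a timed marked graph: min over directed
    cycles of tokens(cycle)/delay(cycle) (classical cycle-ratio bound).
    arcs: dict (u, v) -> (tokens, delay). Brute-force enumeration of cycles."""
    best = float("inf")
    for r in range(1, len(nodes) + 1):
        for order in permutations(nodes, r):
            cycle = list(zip(order, order[1:] + order[:1]))
            if all(e in arcs for e in cycle):
                tok = sum(arcs[e][0] for e in cycle)
                dly = sum(arcs[e][1] for e in cycle)
                if dly > 0:
                    best = min(best, tok / dly)
    return best

# Two-transition pipeline: t1 -> t2 carries 0 tokens, t2 -> t1 carries 2 tokens;
# each arc has delay 1, so the single cycle yields 2 tokens / 2 time units.
arcs = {("t1", "t2"): (0, 1.0), ("t2", "t1"): (2, 1.0)}
print(throughput_bound(["t1", "t2"], arcs))
```

The multi-guarded case studied in the paper needs more machinery precisely because the token requirements of a transition change over time, which is what the linear-programming bounds and subsystem estimates address.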