Unreliable Retrial Queues in a Random Environment
This dissertation investigates stability conditions and approximate steady-state performance measures for unreliable, single-server retrial queues operating in a randomly evolving environment. In such systems, arriving customers that find the server busy or failed join a retrial queue from which they attempt to regain access to the server at random intervals. Such models are useful for the performance evaluation of communications and computer networks which are characterized by time-varying arrival, service and failure rates. To model this time-varying behavior, we study systems whose parameters are modulated by a finite Markov process. Two distinct cases are analyzed. The first considers systems with Markov-modulated arrival, service, retrial, failure and repair rates, assuming all interevent and service times are exponentially distributed. The joint process of the orbit size, environment state, and server status is shown to be a tri-layered, level-dependent quasi-birth-and-death (LDQBD) process, and we provide a necessary and sufficient condition for the positive recurrence of LDQBDs using classical techniques. Moreover, we apply efficient numerical algorithms, designed to exploit the matrix-geometric structure of the model, to compute the approximate steady-state orbit size distribution and mean congestion and delay measures. The second case assumes that customers bring generally distributed service requirements while all other processes are identical to the first case. We show that the joint process of orbit size, environment state and server status is a level-dependent, M/G/1-type stochastic process. By employing regenerative theory, and exploiting the M/G/1-type structure, we derive a necessary and sufficient condition for stability of the system. Finally, for the exponential model, we illustrate how the main results may be used to select system parameters that control the mean time customers spend in orbit, subject to bound and stability constraints.
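The matrix-geometric machinery mentioned above can be illustrated on the simplest possible case. The following sketch, which is not the dissertation's LDQBD algorithm but the classical fixed-point iteration for the rate matrix R of a level-independent QBD, uses the M/M/1 queue (a 1x1 QBD with illustrative rates) as a sanity check, where R should reduce to the utilisation rho = lambda/mu:

```python
import numpy as np

def qbd_rate_matrix(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Minimal fixed-point iteration R <- -(A0 + R^2 A2) A1^{-1} for a
    level-independent QBD whose generator has block rows [A2 A1 A0]
    (A0 = up one level, A1 = local, A2 = down one level)."""
    A1_inv = np.linalg.inv(A1)
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_next = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    return R

# Sanity check: M/M/1 as a 1x1 QBD; the minimal solution of
# mu R^2 - (lam + mu) R + lam = 0 reached from R = 0 is rho = lam/mu.
lam, mu = 0.6, 1.0
A0 = np.array([[lam]])           # arrival: up-transition
A1 = np.array([[-(lam + mu)]])   # local block (generator diagonal)
A2 = np.array([[mu]])            # service completion: down-transition
R = qbd_rate_matrix(A0, A1, A2)
print(R[0, 0])  # ≈ 0.6
```

In the stable case the steady-state level probabilities then follow the matrix-geometric form pi_n = pi_0 R^n; level-dependent models replace the single R with a sequence of level-specific matrices.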
Techniques for the Fast Simulation of Models of Highly Dependable Systems
With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
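The core idea, sampling from a biased distribution that makes the rare event common and correcting with a likelihood ratio, can be shown on a toy problem. This sketch is not one of the paper's dependability estimators; it estimates a Gaussian tail probability via an exponential change of measure (a mean shift), with the threshold and sample count chosen purely for illustration:

```python
import math, random

def rare_tail_is(threshold, n_samples, seed=1):
    """Estimate P(Z > threshold) for Z ~ N(0,1) by sampling from the
    shifted proposal N(threshold, 1) and reweighting each hit by the
    likelihood ratio phi(x) / phi(x - threshold) = exp(-t*x + t^2/2)."""
    rng = random.Random(seed)
    t = threshold
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(t, 1.0)                 # draw from the proposal
        if x > t:                             # the (now common) rare event
            total += math.exp(-t * x + t * t / 2.0)
    return total / n_samples

exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(Z > 4) ≈ 3.17e-5
est = rare_tail_is(4.0, 200_000)
print(est, exact)
```

Naive simulation would need on the order of 10^7 samples to see this event a few hundred times; under the shifted measure roughly half of all samples hit it, which is the run-length reduction the review describes.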
Some aspects of traffic control and performance evaluation of ATM networks
The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate through statistical multiplexing large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. The thesis studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed. It is shown that the neural controller is very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feedforward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic characteristics, and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation has indicated that CAC schemes based on effective bandwidth approximation can be very conservative and prevent optimal use of network resources.
A modified effective bandwidth CAC approach is therefore proposed to overcome the drawback of conventional methods. Considering statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov modulated Poisson process, by matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed, which is a refinement of the original effective bandwidth approximation and can lead to higher link utilisation.
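The large-deviations notion of effective bandwidth used above can be made concrete with a minimal sketch. This is not the thesis's neural or Bahadur-Rao method: it applies the standard formula alpha(s) = (1/s) log E[exp(s*A)] to the simplest tractable source (i.i.d. per-slot Poisson arrivals) together with a purely additive admission test, with all rates and the space parameter s chosen for illustration:

```python
import math

def eff_bandwidth_poisson(lam, s):
    """Effective bandwidth (1/s) * log E[exp(s*A)] for per-slot
    Poisson(lam) arrivals; the log-MGF is lam*(e^s - 1), so
    alpha(s) = lam * (e^s - 1) / s."""
    return lam * math.expm1(s) / s

def admit(lams, s, capacity):
    """Additive CAC test: accept the call mix if the sum of
    effective bandwidths fits within the link capacity."""
    return sum(eff_bandwidth_poisson(l, s) for l in lams) <= capacity

# As s -> 0 the effective bandwidth approaches the mean rate; larger s
# (stricter QOS) pushes it toward the source's peak behaviour.
print(eff_bandwidth_poisson(1.0, 1e-6))            # ≈ 1.0 (mean rate)
print(admit([0.5, 0.8, 1.2], s=0.1, capacity=3.0))  # True
```

The conservatism the thesis criticises shows up here as well: the additive test ignores statistical multiplexing gain across sources, which is what the aggregate-traffic matching approach is designed to recover.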
Estimating probability distributions of dynamic queues
Queues are often associated with uncertainty or unreliability, which can arise from chance or climatic events, phase changes in system behaviour, or inherent randomness. Knowing the probability distribution of the number of customers in a queue is important for estimating the risk of stress or disruption to routine services and upstream blocking, potentially leading to exceeding critical limits, gridlock or incidents. The present paper focuses on time-varying queues produced by transient oversaturation during demand peaks where there is randomness in arrivals and service. The objective is to present practical methods for estimating a probability distribution from knowledge of the mean, variance and utilisation (degree of saturation) of a queue available from computationally efficient, if approximate, time-dependent calculation. This is made possible by a novel expression for time-dependent queue variance. The queue processes considered are those commonly used to represent isolated priority (M/M/1) and signal-like (M/D/1) systems, plus some statistical variations within the common Pollaczek-Khinchin framework. Results are verified by comparison with Markov simulation based on recurrence relations.
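The recurrence-relation verification mentioned at the end has a simple shape: step the forward (Chapman-Kolmogorov) equations of the queue-length Markov chain on a truncated state space. The sketch below, with illustrative rates and an Euler step size small enough for stability, does this for M/M/1 starting from an empty queue; at large t the transient distribution should settle onto the stationary law (1 - rho) * rho^n:

```python
def mm1_transient(lam, mu, t_end, dt=0.01, n_max=60):
    """Euler-step the M/M/1 forward equations
       dp_n/dt = lam*p_{n-1} + mu*p_{n+1} - (lam*[n<n_max] + mu*[n>0])*p_n
    on states 0..n_max, starting from an empty queue (p_0 = 1)."""
    p = [0.0] * (n_max + 1)
    p[0] = 1.0
    for _ in range(int(t_end / dt)):
        q = p[:]
        for n in range(n_max + 1):
            inflow = (lam * q[n - 1] if n > 0 else 0.0)
            inflow += (mu * q[n + 1] if n < n_max else 0.0)
            outflow = (lam if n < n_max else 0.0) + (mu if n > 0 else 0.0)
            p[n] = q[n] + dt * (inflow - outflow * q[n])
        s = sum(p)                    # renormalise against drift from
        p = [x / s for x in p]        # truncation and rounding
    return p

# lam=0.5, mu=1.0: by t=100 the distribution is essentially stationary,
# so p[0] should be close to 1 - rho = 0.5.
p = mm1_transient(0.5, 1.0, t_end=100.0)
print(p[0])
```

During the transient (small t_end) the same routine yields the time-varying distribution that the paper's mean-variance-utilisation approximations are checked against.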
Addressing traffic congestion and throughput through optimization
Masters Degree. University of KwaZulu-Natal, Durban.
Traffic congestion experienced in port precincts has become prevalent in recent years
in South Africa and internationally [1, 2, 3]. In addition to the environmental impact
of air pollution due to this challenge, economic effects also weigh heavily on profit
margins through added fuel costs and time wastage. Even though there are many
common factors contributing to congestion experienced in port precincts and other areas,
operational inefficiencies due to slow productivity and a lack of handling equipment to
service trucks in port areas are a major contributor [4, 5].
While there are several optimisation approaches to addressing traffic congestion,
such as Queuing Theory [6], Genetic Algorithms [7], Ant Colony Optimisation [8]
and Particle Swarm Optimisation [9], traffic congestion manifests as congested
queues, which makes queuing theory the most suitable approach for this problem.
Queuing theory is a discipline of optimisation that studies the dynamics of queues
to determine ways of reducing waiting times.
The use of optimisation to address the root cause of port traffic congestion has been
lacking, with several studies focused on specific traffic zones that address only the
symptoms. In addition, research into traffic around port precincts has also been
limited to the road-side, with proposed solutions focusing on scheduling and
appointment systems [25, 56], or to the sea-side, focusing on managing vessel traffic
congestion [30, 31, 58]. The aim of this dissertation is to close this gap through the
novel design and development of Caudus, a smart queue solution that addresses traffic
congestion and throughput through optimisation. The name “CAUDUS” is derived as
an anagram with Latin origins meaning “remove truck congestion”.
Caudus has three objective functions to address congestion in the port precinct and,
by extension, congestion in warehousing and freight logistics environments, viz.
preventive, reactive and predictive. The preventive objective function employs
Little’s rule [14] to derive the algorithm for preventing congestion. Acknowledging
that congestion is not always avoidable, the reactive objective function addresses the
problem by leveraging Caudus’ integration capability with Intelligent Transport
Systems [65] in conjunction with other road-user network solutions. The predictive
objective function aims to keep the environment incident free and provides
early-warning detection of possible exceptions in traffic situations that may lead to
congestion. This is achieved using the algorithms derived in this study, which identify
bottleneck symptoms in one traffic zone when the root cause exists in an adjoining
traffic area.
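Little's rule, the basis of the preventive objective function, relates the expected number in a system to its arrival rate and mean sojourn time via L = lambda * W. The following is only an illustrative sketch of that relation, not Caudus's actual algorithm; the capacity threshold and function names are hypothetical:

```python
def expected_trucks_in_system(arrival_rate, mean_time_in_system):
    """Little's rule: L = lambda * W.
    arrival_rate in trucks/hour, mean_time_in_system in hours."""
    return arrival_rate * mean_time_in_system

def congestion_alert(arrival_rate, mean_time_in_system, capacity):
    """Hypothetical preventive check: flag when the expected number of
    trucks in the precinct would exceed its holding capacity."""
    return expected_trucks_in_system(arrival_rate, mean_time_in_system) > capacity

# e.g. 12 trucks/hour arriving with a 1.5 h mean turnaround -> L = 18,
# which exceeds an assumed holding capacity of 15 trucks.
print(expected_trucks_in_system(12.0, 1.5))      # 18.0
print(congestion_alert(12.0, 1.5, capacity=15))  # True
```

Because Little's rule holds for any stable queueing system regardless of arrival or service distributions, a check of this shape can be applied before congestion forms, which is what makes it suitable for a preventive rather than reactive control.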
The Caudus Simulation was developed in this study to test the derived algorithms
against the different congestion scenarios. The simulation uses HTML5 and
JavaScript in the front-end GUI, with a SQL code base in the back-end. The entire
simulation process is triggered by a series of multi-threaded batch programs that
mimic the real world by ensuring process independence for the various simulation
activities. The results from the simulation demonstrate a significant reduction in the
duration of congestion experienced in the port precinct. They also show a reduction in
the throughput time of trucks serviced at the port, demonstrating Caudus’ novel
contribution to addressing traffic congestion and throughput through optimisation.
These results were also published and presented at the International Conference on
Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD
2021) under the title “CAUDUS: An Optimisation Model to Reducing Port Traffic
Congestion” [84].
Performance Analysis of Multi-Server Queueing System Operating under Control of a Random Environment