48 research outputs found

    Performance analysis of priority queueing systems in discrete time

    The integration of different types of traffic in packet-based networks creates the need for traffic differentiation. In this tutorial paper, we present analytical techniques for tackling discrete-time queueing systems with priority scheduling. We investigate both preemptive (resume and repeat) and non-preemptive priority scheduling disciplines. Two classes of traffic are considered, high-priority and low-priority, both of which generate variable-length packets. A probability-generating-functions approach leads to performance measures such as moments of the system contents and packet delays of both classes.
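    To make the setting above concrete, the sketch below simulates a discrete-time, non-preemptive priority queue with two Bernoulli arrival streams and geometric packet lengths, and estimates the per-class mean system contents and packet delays. It is an illustrative simulation only, not the paper's analytical PGF approach; all parameter values (arrival probabilities, mean packet lengths) are assumptions.

```python
import random

def geom_len(rng, mean):
    """Geometric packet length on {1, 2, ...} with the given mean."""
    p = 1.0 / mean
    n = 1
    while rng.random() > p:
        n += 1
    return n

def simulate_priority_queue(slots=200_000, p_hi=0.15, p_lo=0.2,
                            len_hi=2, len_lo=2, seed=1):
    """Discrete-time non-preemptive priority queue with two packet classes.

    In every slot, a high- and a low-priority packet arrive with probabilities
    p_hi and p_lo; transmission times (in slots) are geometric with means
    len_hi and len_lo.  A free server always takes the head-of-line
    high-priority packet first, and an ongoing transmission is never
    interrupted (non-preemptive).  Returns the time-average system contents
    and the mean packet delay of each class.
    """
    rng = random.Random(seed)
    q_hi, q_lo = [], []                 # queued packets: (arrival_slot, length)
    serving = None                      # (class, arrival_slot, remaining_slots)
    content = {1: 0, 2: 0}
    delays = {1: [], 2: []}
    for t in range(slots):
        if rng.random() < p_hi:
            q_hi.append((t, geom_len(rng, len_hi)))
        if rng.random() < p_lo:
            q_lo.append((t, geom_len(rng, len_lo)))
        if serving is None:             # server free: high priority goes first
            if q_hi:
                a, s = q_hi.pop(0); serving = (1, a, s)
            elif q_lo:
                a, s = q_lo.pop(0); serving = (2, a, s)
        content[1] += len(q_hi) + (1 if serving and serving[0] == 1 else 0)
        content[2] += len(q_lo) + (1 if serving and serving[0] == 2 else 0)
        if serving:                     # one slot of transmission elapses
            c, a, s = serving
            serving = None if s == 1 else (c, a, s - 1)
            if serving is None:
                delays[c].append(t + 1 - a)   # departure slot minus arrival slot
    avg = lambda xs: sum(xs) / len(xs)
    return (content[1] / slots, content[2] / slots, avg(delays[1]), avg(delays[2]))

print(simulate_priority_queue())
```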

    Discrete Time Analysis of Consolidated Transport Processes

    This thesis is concerned with the development of discrete-time models for the analysis of transport consolidation. The models developed for inventory and vehicle consolidation, in particular milkrun systems, allow a detailed performance evaluation to be carried out in a short time. In addition, the models permit the analysis of consolidation at transshipment warehouses, for example hub-and-spoke networks, by linking the models with one another within a network analysis.

    Analysis of limited-priority scheduling rules in discrete-time queues


    Study on the Queue-Length Distribution in Geo/G/1/N Queue with Multiple Working Vacations

    This paper analyzes a finite-buffer discrete-time Geo/G/1/N queue with multiple working vacations and different input rates. Using the supplementary variable technique and the embedded Markov chain method, the queue-length distribution at an arbitrary epoch is obtained in closed form. Some performance measures associated with the operating cost are also discussed based on the obtained queue-length distribution. Several numerical experiments then demonstrate the effectiveness of the obtained formulae. Finally, a state-dependent operating cost function is constructed to model an express logistics service center. Taking the service rate during working vacations as a control variable, the cost function is optimized using the parabolic method.
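    The final optimization step can be illustrated with a small sketch of the parabolic method (successive parabolic interpolation): fit a parabola through three trial service rates, jump to its vertex, and repeat. The cost function below is purely hypothetical (a congestion term that grows as the working-vacation service rate approaches the arrival rate, plus a cost proportional to the rate itself) and is not the state-dependent cost function derived in the paper; all coefficients are assumptions.

```python
def parabolic_minimize(f, xs, tol=1e-8, max_iter=200):
    """Successive parabolic interpolation ('parabolic method') sketch.

    Fits a parabola through the current three trial points, evaluates the
    cost at its vertex and keeps the three cheapest points.  Suitable for a
    smooth, unimodal cost; none of the safeguards of Brent's method are added.
    """
    pts = [(x, f(x)) for x in xs]                       # three starting points
    for _ in range(max_iter):
        (x1, f1), (x2, f2), (x3, f3) = sorted(pts)      # order by x
        num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
        den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
        if abs(den) < 1e-15:                            # degenerate parabola
            break
        x_new = x2 - 0.5 * num / den                    # vertex of the parabola
        if min(abs(x_new - x) for x, _ in pts) < tol:   # converged
            break
        pts.append((x_new, f(x_new)))
        pts = sorted(pts, key=lambda p: p[1])[:3]       # keep the three best
    return min(pts, key=lambda p: p[1])

def cost(mu, lam=0.3, c_hold=5.0, c_rate=12.0):
    # Hypothetical operating cost: congestion term that blows up as the
    # working-vacation service rate mu approaches the arrival rate lam,
    # plus a cost proportional to the service rate itself.
    return c_hold * lam / (mu - lam) + c_rate * mu

mu_opt, c_opt = parabolic_minimize(cost, (0.4, 0.6, 0.9))
print(f"optimal vacation-period service rate ~ {mu_opt:.3f}, cost ~ {c_opt:.2f}")
```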

    Analysis of queueing models with batch service

    This dissertation is the result of my research work at the SMACS research group (Department of Telecommunications and Information Processing, Ghent University) and it concerns the analysis of queueing models with batch service. A queueing model is basically a mathematical abstraction of a situation where customers arrive and queue up until they receive some kind of service. These phenomena are omnipresent in real life: people waiting at a counter of a post office or bank, people in the waiting room of a doctor, airplanes waiting to take off, people waiting until they get connected to a call center, data packets that are temporarily stored in a buffer until the transmission channel is available, et cetera. The analysis of queueing models constitutes the subject of the applied mathematical discipline called queueing theory and amounts to answering questions such as “How many customers are waiting on average?”, “How long do customers have to wait?”, “Is there a large variation in the waiting time?”, “What is the probability that data packets are lost due to a full buffer?”, “What is the probability that a customer suffers a lengthy delay?”, et cetera. In queueing theory, the number of customers and their waiting time are referred to as the buffer content and the customer delay, respectively. In addition, the probability that a quantity such as the buffer content or the customer delay is very large is generally called a tail probability. The models we investigate throughout this dissertation have in common that customers can be served in batches, meaning that several customers can be served simultaneously. An elevator is a classic example, as several people can be transported to another floor at the same time. Also, in a variety of production and transport processes, several goods can be processed together. Furthermore, in quality control, classifying items as good or bad can often be done more economically by examining the items in groups rather than individually. If the result of a group test is good, all items in the group can be classified as good; in the opposite case at least one item is bad, and the items can then be retested in smaller groups. Group testing is especially important when the percentage of bad items is small. In addition, in telecommunication networks, packets with the same destination and quality-of-service (QoS) requirements are often aggregated into so-called bursts, and these bursts are transmitted over the network. This is mainly done for efficiency reasons, since only one header per aggregated burst has to be constructed, instead of one header per single information unit, thus increasing throughput. Technologies using packet aggregation include, for instance, optical burst-switched (OBS) networks and IEEE 802.11n wireless local area networks (WLANs). An inherent aspect of batch service is that newly arriving customers cannot join an ongoing service, even if there is free capacity (we refer to the maximum number of customers that can be served simultaneously as the server capacity). For instance, an arriving person cannot enter an elevator that has just left, even if space is available. This person has to wait until the elevator has transported its occupants to their requested floors and has returned, which might take a long time in tall buildings.
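    The economics of the group-testing example above can be made concrete with the classic one-stage (Dorfman) scheme: test a pooled group of k items; if the pooled test is good, the whole group is cleared with a single test, otherwise all k items are retested individually. The sketch below computes the expected number of tests per item and searches for the best group size; the 2 % defect rate is an assumed value for illustration and the calculation is not taken from the dissertation itself.

```python
def dorfman_tests_per_item(p, k):
    """Expected tests per item under one-stage (Dorfman) group testing:
    one pooled test per group of k items, plus k individual retests
    whenever the group contains at least one bad item, which happens
    with probability 1 - (1 - p)**k."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

p = 0.02  # assumed fraction of bad items (2 %)
best_k = min(range(2, 51), key=lambda k: dorfman_tests_per_item(p, k))
print(best_k, round(dorfman_tests_per_item(p, best_k), 3))
# prints: 8 0.274 -- roughly a quarter of the tests needed for item-by-item testing
```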
In view of this inability to join an ongoing service, it is important to take a well-considered decision when the server becomes available and finds fewer customers than it could serve. This decision is called the service policy, and a whole spectrum of service policies exists. The server could, for instance, start serving the already present customers immediately. Although the present customers benefit from this approach, capacity is wasted: customers that arrive later cannot join the ongoing service. An alternative to this so-called immediate-batch service policy is the full-batch service policy. In this case, the available server postpones service until the number of present customers reaches or exceeds the server capacity, which, in turn, has a negative effect on the delay of the customers waiting to form a full batch (the postponing delay). The threshold-based policy is a compromise between the immediate-batch and full-batch service policies: when the number of present customers is below some service threshold, service is postponed, whereas service is initiated when the number of present customers reaches or exceeds this threshold. It is important to realize that even with this compromise, long postponing delays are possible. Therefore, in this dissertation, we combine a threshold-based policy with a timer mechanism that avoids excessive postponing delays. The purpose of this dissertation is to calculate a broad spectrum of performance measures, which make it possible to evaluate a wide range of batch-service situations and aid in selecting an efficient service policy. The studied performance measures are moments, such as the mean value and the variance, and tail probabilities of the buffer content and the customer delay. This dissertation is structured as follows. In chapter 1, we motivate our work and introduce crucial concepts such as probability generating functions (PGFs), whose useful properties are frequently relied upon throughout the analysis. Then we deduce moments and tail probabilities of the buffer content in chapter 2. The resulting formulas still contain unknown probabilities that have to be calculated numerically. As this might become infeasible in some cases, we compute approximations for the buffer content in chapter 3. Next, moments and tail probabilities of the customer delay are covered in chapters 4 and 5, respectively. In order to analyze the moments, we conceive the customer delay as the sum of two non-overlapping parts, whereas for the tail probabilities it turns out to be more convenient to interpret the delay as the maximum of two time periods. Further, in real life the customer arrival process often exhibits some kind of dependency. For instance, if a large number of customers have recently arrived, it is likely that many more will arrive in the near future, as this might indicate a peak moment. Therefore, in chapter 6 we investigate the influence of dependency in the arrival process on the behaviour of batch-service phenomena and on the selection of an efficient service policy. Finally, the main contributions are summarized in chapter 7.
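    As a companion to the service-policy discussion, the sketch below simulates a discrete-time batch-service queue under the immediate-batch, full-batch and threshold-with-timer policies and compares the resulting mean customer delays. It is only an illustrative simulation under assumed parameters (Bernoulli arrivals, fixed batch service time, server capacity 8, threshold 4, timer of 20 slots); the dissertation itself obtains these performance measures analytically via PGFs.

```python
import random

def mean_batch_delay(policy, capacity=8, threshold=4, timer=20,
                     p_arrival=0.3, service_slots=4, slots=300_000, seed=7):
    """Mean customer delay (in slots) of a discrete-time batch-service queue.

    policy: 'immediate' (serve whoever is present as soon as the server is
    free), 'full' (wait for a full batch of `capacity` customers) or
    'threshold' (wait until `threshold` customers are present, or until the
    server has been idle for `timer` slots with at least one customer waiting).
    One Bernoulli arrival per slot; every batch occupies the server for
    `service_slots` slots, regardless of its size.
    """
    rng = random.Random(seed)
    queue, busy_left, idle_for, delays = [], 0, 0, []
    for t in range(slots):
        if rng.random() < p_arrival:          # at most one arrival per slot
            queue.append(t)
        if busy_left == 0:                    # server is free this slot
            idle_for += 1
            n = len(queue)
            start = (
                (policy == 'immediate' and n > 0) or
                (policy == 'full' and n >= capacity) or
                (policy == 'threshold' and
                 (n >= threshold or (n > 0 and idle_for >= timer)))
            )
            if start:                         # take up to `capacity` customers
                batch, queue = queue[:capacity], queue[capacity:]
                delays.extend(t + service_slots - a for a in batch)
                busy_left, idle_for = service_slots, 0
        if busy_left > 0:                     # one slot of service elapses
            busy_left -= 1
    return sum(delays) / len(delays)

for policy in ('immediate', 'full', 'threshold'):
    print(policy, round(mean_batch_delay(policy), 2))
```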