5,702 research outputs found

    Propagation of epistemic uncertainty in queueing models with unreliable server using chaos expansions

    In this paper, we develop a numerical approach based on chaos expansions to analyze the sensitivity and the propagation of epistemic uncertainty through a queueing system with breakdowns. Here, the quantity of interest is the stationary distribution of the model, which is a function of uncertain parameters. Polynomial chaos expansions provide an efficient alternative to more traditional Monte Carlo simulations for modelling the propagation of uncertainty arising from those parameters. Furthermore, the polynomial chaos expansion affords a natural framework for computing Sobol' indices. Such indices give reliable information on the relative importance of each uncertain input parameter. Numerical results show the benefit of using polynomial chaos over standard Monte Carlo simulations when considering statistical moments and Sobol' indices as output quantities.
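
    As a concrete (and deliberately tiny) illustration of the technique, not the authors' implementation: the sketch below builds a tensor Legendre chaos expansion for the stationary mean queue length of an M/M/1 queue whose arrival and service rates are uniformly uncertain, then reads the mean, the variance, and the first-order Sobol' indices directly off the expansion coefficients; the parameter ranges are invented.

        import numpy as np
        from numpy.polynomial.legendre import leggauss, legval

        # Toy quantity of interest: stationary mean number in an M/M/1 queue,
        # L = rho / (1 - rho) with rho = lam / mu (stable while rho < 1).
        def qoi(lam, mu):
            rho = lam / mu
            return rho / (1.0 - rho)

        # Hypothetical uncertain inputs as affine maps of U(-1, 1) variables:
        # lam ~ U(0.4, 0.6), mu ~ U(0.9, 1.1).
        to_lam = lambda x: 0.5 + 0.1 * x
        to_mu = lambda y: 1.0 + 0.1 * y

        deg = 4                              # Legendre degree per dimension
        x, w = leggauss(deg + 4)             # Gauss-Legendre nodes and weights
        w = w / 2.0                          # account for the U(-1,1) density 1/2

        def P(k, t):                         # k-th Legendre polynomial at t
            c = np.zeros(k + 1); c[k] = 1.0
            return legval(t, c)

        norm = lambda k: 1.0 / (2 * k + 1)   # E[P_k(X)^2] for X ~ U(-1, 1)

        # Project the QoI onto the tensor basis by quadrature.
        X, Y = np.meshgrid(x, x, indexing="ij")
        W = np.outer(w, w)
        F = qoi(to_lam(X), to_mu(Y))
        coef = np.zeros((deg + 1, deg + 1))
        for i in range(deg + 1):
            for j in range(deg + 1):
                coef[i, j] = np.sum(F * P(i, X) * P(j, Y) * W) / (norm(i) * norm(j))

        # Moments and first-order Sobol' indices from the coefficients alone.
        mean = coef[0, 0]
        nrm = np.array([norm(k) for k in range(deg + 1)])
        var_terms = coef**2 * np.outer(nrm, nrm)
        var_terms[0, 0] = 0.0
        variance = var_terms.sum()
        S_lam = var_terms[1:, 0].sum() / variance
        S_mu = var_terms[0, 1:].sum() / variance
        print(f"PCE mean={mean:.5f} var={variance:.5f} S_lam={S_lam:.3f} S_mu={S_mu:.3f}")

        # Plain Monte Carlo reference.
        rng = np.random.default_rng(0)
        g = qoi(to_lam(rng.uniform(-1, 1, 200_000)), to_mu(rng.uniform(-1, 1, 200_000)))
        print(f"MC  mean={g.mean():.5f} var={g.var():.5f}")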

    The MVA Priority Approximation

    A Mean Value Analysis (MVA) approximation is presented for computing the average performance measures of closed-, open-, and mixed-type multiclass queuing networks containing Preemptive Resume (PR) and nonpreemptive Head-Of-Line (HOL) priority service centers. The approximation has essentially the same storage and computational requirements as MVA, thus allowing computationally efficient solutions of large priority queuing networks. The accuracy of the MVA approximation is systematically investigated and presented. It is shown that the approximation can compute the average performance measures of priority networks to within an accuracy of 5 percent for a large range of network parameter values. The accuracy of the method is shown to be superior to that of Sevcik's shadow approximation.
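
    For orientation, the exact single-class MVA recursion that such approximations extend fits in a few lines. The sketch below (the service demands are made up) iterates the standard residence-time, throughput, and queue-length equations; the paper's contribution lies in how the residence-time step is modified at PR and HOL priority centers, which is not shown here.

        def mva(demands, N):
            """Exact single-class MVA for a closed network of queueing
            stations with service demands `demands` and population N >= 1."""
            K = len(demands)
            Q = [0.0] * K                                # queue lengths, population 0
            for n in range(1, N + 1):
                R = [demands[k] * (1.0 + Q[k]) for k in range(K)]  # residence times
                X = n / sum(R)                           # system throughput
                Q = [X * R[k] for k in range(K)]         # Little's law per station
            return X, R, Q

        # Example: three stations, ten customers.
        X, R, Q = mva([0.2, 0.4, 0.1], 10)
        print(f"throughput={X:.3f}, queue lengths={[round(q, 2) for q in Q]}")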

    The pseudo-self-similar traffic model: application and validation

    Since the early 1990s, a variety of studies have shown that network traffic, for both local- and wide-area networks, has self-similar properties. This led to new approaches in network traffic modelling because most traditional traffic models result in the underestimation of performance measures of interest. Instead of developing completely new traffic models, a number of researchers have proposed to adapt traditional traffic modelling approaches to incorporate aspects of self-similarity. The motivation for doing so is the hope of being able to reuse techniques and tools that have been developed in the past and with which experience has been gained. One such approach is the so-called pseudo self-similar traffic model. This model is appealing, as it is easy to understand and easily embedded in Markovian performance evaluation studies. In applying this model in a number of cases, we have encountered various problems which we initially thought were particular to those specific cases. However, we have recently been able to show that these problems are fundamental to the pseudo self-similar traffic model. In this paper we review the pseudo self-similar traffic model and discuss its fundamental shortcomings. As far as we know, this is the first paper that discusses these shortcomings formally. We also report on ongoing work to overcome some of these problems.
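
    One standard check used in this kind of validation work is the variance-time plot: for a self-similar process the variance of the m-aggregated series decays like m^(2H - 2), so the Hurst parameter H can be read off a log-log regression. A minimal sketch, with a Poisson count series standing in for a measured trace (the estimate should then sit near H = 0.5, the short-range-dependent value):

        import numpy as np

        def hurst_variance_time(x, levels=range(1, 8)):
            """Estimate H from the variance-time plot: Var(X^(m)) ~ m^(2H-2)."""
            ms, vs = [], []
            for k in levels:
                m = 2 ** k
                n = len(x) // m
                agg = x[: n * m].reshape(n, m).mean(axis=1)   # m-aggregated series
                ms.append(m); vs.append(agg.var())
            slope, _ = np.polyfit(np.log(ms), np.log(vs), 1)
            return 1.0 + slope / 2.0

        rng = np.random.default_rng(1)
        print(hurst_variance_time(rng.poisson(5.0, 2 ** 18)))  # ~0.5 for Poisson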

    Performance Modelling and Optimisation of Multi-hop Networks

    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, that are susceptible to random failures, security threats and malicious behaviours which compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity regarding routing and the loss rate that a packet experiences as it passes successive segments of a source to destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rate in areas remote from the source and destination, variable rate of advancement towards destination over the route, as well as of defending against malicious packets within a certain distance from the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet’s travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC, in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
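
    As a toy illustration of the diffusion view of packet travel (not the thesis's mixed analytical-numerical method), the sketch below simulates a packet whose remaining distance to the destination drifts towards zero under Gaussian noise and which may be dropped at a position-dependent rate; the drift, noise, and loss-rate numbers are all invented.

        import numpy as np

        rng = np.random.default_rng(2)

        def travel(r0=10.0, drift=1.0, sigma=0.8, dt=0.01,
                   loss=lambda r: 0.02 * r):
            """One packet: Euler steps of the distance-to-destination process,
            with delivery when r hits 0 and loss at position-dependent rate."""
            r, t = r0, 0.0
            while r > 0.0:
                if rng.random() < loss(r) * dt:          # dropped en route
                    return None
                r += -drift * dt + sigma * np.sqrt(dt) * rng.normal()
                t += dt
            return t

        times = [t for t in (travel() for _ in range(2000)) if t is not None]
        print(f"delivered {len(times) / 2000:.1%}, mean travel time {np.mean(times):.2f}")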

    A tool for model-checking Markov chains

    Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker E⊢MC², where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting lessons learned during the development and application of E⊢MC².
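
    The computational core of verifying a time-bounded property of a continuous-time Markov chain is a transient analysis, usually carried out by uniformisation. The sketch below is a minimal illustration of that step (the tiny three-state availability model is hypothetical, not one of the paper's case studies):

        import numpy as np

        def bounded_reach(Q, goal, t, eps=1e-9):
            """P(reach a goal state within time t) in a CTMC with generator
            matrix Q, computed by uniformisation; goal states are first made
            absorbing, as in checking a time-bounded until formula."""
            Q = np.array(Q, dtype=float)
            Q[goal, :] = 0.0                         # absorb in goal states
            lam = max(-Q.diagonal().min(), 1e-12)    # uniformisation rate
            P = np.eye(len(Q)) + Q / lam             # DTMC of the uniformised chain
            u = np.zeros(len(Q)); u[goal] = 1.0      # indicator of goal states
            w = np.exp(-lam * t)                     # Poisson(0; lam*t) weight
            acc, total, k = w * u, w, 1
            while total < 1.0 - eps:                 # sum Poisson-weighted steps
                u = P @ u
                w *= lam * t / k
                acc += w * u
                total += w
                k += 1
            return acc                               # one probability per start state

        # Hypothetical availability model: 0 = up, 1 = degraded, 2 = failed.
        Q = [[-0.10,  0.10,  0.00],
             [ 0.50, -0.70,  0.20],
             [ 0.00,  0.00,  0.00]]
        print(bounded_reach(Q, [2], t=10.0)[0])      # P(failure within t, from "up")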

    Many-server queues with customer abandonment: numerical analysis of their diffusion models

    We use multidimensional diffusion processes to approximate the dynamics of a queue served by many parallel servers. The queue is served in the first-in-first-out (FIFO) order and the customers waiting in queue may abandon the system without service. Two diffusion models are proposed in this paper. They differ in how the patience time distribution is built into them. The first diffusion model uses the patience time density at zero and the second one uses the entire patience time distribution. To analyze these diffusion models, we develop a numerical algorithm for computing the stationary distribution of such a diffusion process. A crucial part of the algorithm is to choose an appropriate reference density. Using a conjecture on the tail behavior of a limit queue length process, we propose a systematic approach to constructing a reference density. With the proposed reference density, the algorithm is shown to converge quickly in numerical experiments. These experiments also show that the diffusion models are good approximations for many-server queues, sometimes for queues with as few as twenty servers.
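
    In one dimension the corresponding computation is elementary, which makes it a useful point of comparison: for a diffusion with drift b(x) and constant diffusion coefficient sigma^2, the stationary density is proportional to exp(∫_0^x 2 b(u) / sigma^2 du). The sketch below uses a hypothetical piecewise-linear drift of the shape that arises in many-server limits (a different pull when servers are idle than when customers wait and may abandon); it is not the paper's multidimensional reference-density algorithm.

        import numpy as np

        # Hypothetical piecewise-linear drift: x < 0 means idle servers,
        # x >= 0 means waiting customers who abandon at rate theta.
        mu, theta, beta = 1.0, 0.5, 0.5
        sigma2 = 2.0 * mu

        def drift(x):
            return np.where(x < 0.0, -mu * (x + beta), -theta * x - mu * beta)

        x = np.linspace(-6.0, 8.0, 4001)
        dx = x[1] - x[0]
        # pi(x) ~ exp( int_0^x 2 b(u) / sigma^2 du ), accumulated on the grid.
        potential = np.concatenate(([0.0],
                                    np.cumsum(2.0 * drift(x)[:-1] / sigma2 * dx)))
        pi = np.exp(potential - potential.max())   # overflow-safe, unnormalised
        pi /= pi.sum() * dx                        # normalise to a density
        print(f"P(customers waiting) ~ {pi[x >= 0.0].sum() * dx:.3f}")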

    Queuing theory-based latency/power tradeoff models for replicated search engines

    Large-scale search engines are built upon huge infrastructures involving thousands of computers in order to achieve fast response times. At the same time, the energy they consume (and hence their financial cost) is high, leading to environmental damage. This paper proposes new approaches to increase energy and financial savings in large-scale search engines while maintaining good query response times. We aim to improve current state-of-the-art models used for balancing power and latency by integrating new advanced features. On the one hand, we propose to improve power savings by completely powering down the query servers that are not necessary when the load on the system is low; in addition, we incorporate energy rates into the model formulation. On the other hand, we focus on how to accurately estimate the latency of the whole system by means of queueing theory. Experiments using actual query logs attest to the high energy (and financial) savings relative to current baselines. To the best of our knowledge, this is the first paper to successfully apply stationary queueing-theory models to estimate the latency of a large-scale search engine.
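
    A back-of-the-envelope version of such a latency model (not the paper's formulation): treat the powered-on query servers of a replica as an M/M/c queue, obtain the mean response time from the Erlang-C formula, and sweep the number of active servers to expose the power/latency tradeoff. The arrival rate, per-server service rate, and wattage below are invented.

        from math import factorial

        def mmc_response(lam, mu, c):
            """Mean response time of an M/M/c queue via the Erlang-C formula."""
            a = lam / mu                               # offered load in Erlangs
            if a >= c:
                return float("inf")                    # unstable configuration
            tail = a**c / factorial(c) * c / (c - a)
            p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
            return 1.0 / mu + p_wait / (c * mu - lam)  # service + queueing delay

        # Hypothetical sizing sweep: power drawn vs. mean query latency.
        lam, mu, watts = 80.0, 10.0, 150.0             # queries/s, rate, W/server
        for c in range(9, 15):
            r = mmc_response(lam, mu, c)
            print(f"c={c:2d}  power={c * watts:6.0f} W  latency={r * 1000:6.1f} ms")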