
    Efficient simulation of large deviation events for sums of random vectors using saddle-point representations

    We consider the problem of efficient simulation estimation of the density function at the tails, and of the probability of large deviations, for a sum of independent, identically distributed (i.i.d.), light-tailed and non-lattice random vectors. The latter problem, besides being of independent interest, also forms a building block for more complex rare-event problems that arise, for instance, in queueing and financial credit risk modeling. It has been extensively studied in the literature, where state-independent, exponential-twisting-based importance sampling has been shown to be asymptotically efficient, and a more nuanced state-dependent exponential twisting has been shown to have the stronger bounded relative error property. We exploit the saddle-point-based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. These representations reduce the rare-event estimation problem to evaluating certain integrals, which may, via importance sampling, be represented as expectations. Furthermore, it is easy to identify and approximate the zero-variance importance sampling distribution for estimating these integrals. We identify such importance sampling measures and show that they possess the asymptotically vanishing relative error property, which is stronger than the bounded relative error property. To illustrate the broader applicability of the proposed methodology, we extend it to develop an asymptotically vanishing relative error estimator for the practically important expected overshoot of sums of i.i.d. random variables.
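    As background for the twisting baseline discussed in this abstract, the following is a minimal sketch (not the paper's saddle-point estimator) of classical state-independent exponential twisting for the one-dimensional tail probability P(S_n >= n*a) with X_i ~ Exp(1); the function name and parameters are illustrative assumptions.

```python
import numpy as np

# Twisting an Exp(1) density by exp(theta*x)/M(theta), where
# M(theta) = 1/(1 - theta) for theta < 1, yields an Exp(1 - theta) density.
# theta is chosen so the twisted mean equals a, i.e. theta = 1 - 1/a.

def twisted_is_estimate(n, a, num_reps=100_000, seed=None):
    rng = np.random.default_rng(seed)
    theta = 1.0 - 1.0 / a                      # twisted mean = 1/(1-theta) = a
    log_m = -np.log(1.0 - theta)               # log M(theta)
    # Draw S_n under the twisted (importance sampling) measure.
    s = rng.exponential(1.0 / (1.0 - theta), size=(num_reps, n)).sum(axis=1)
    # Likelihood ratio dP/dP_theta = exp(-theta*S_n + n*log M(theta)).
    w = np.exp(-theta * s + n * log_m) * (s >= n * a)
    est = w.mean()
    err = w.std(ddof=1) / (est * np.sqrt(num_reps))
    return est, err

est, err = twisted_is_estimate(n=50, a=2.0, seed=1)
print(f"P(S_50 >= 100) ~ {est:.3e} (est. relative error {err:.1%})")
```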

    Fast simulation of a queue fed by a superposition of many (heavy-tailed) sources

    We consider a queue fed by a large number, say n, of on-off sources with generally distributed on- and off-times. The queueing resources are scaled by n: the buffer is B ≡ nb and the link rate is C ≡ nc. The model is versatile: it allows us to model both long-range dependent traffic (by using heavy-tailed distributed on-periods) and short-range dependent traffic (by using light-tailed on-periods). A crucial performance metric in this model is the steady-state buffer overflow probability. This overflow probability decays exponentially in the number of sources n. Therefore, if the number of sources grows large, naive simulation is too time-consuming, and we have to use fast simulation techniques instead. Due to the exponential decay (in n), importance sampling with an exponential change of measure essentially goes through, irrespective of the on-times being heavy-tailed or light-tailed. An asymptotically optimal change of measure is found by using large deviations arguments. Notably, the change of measure is not constant during the simulation run, which is essentially different from many other studies (usually relying on large-buffer asymptotics). We provide numerical examples to show that the resulting importance sampling procedure indeed improves considerably over naive simulation. We present some accelerations. Finally, we give short comments on the influence of the shape of the distributions on the loss probability, and we describe the limitations of our technique.
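    To make the exponential change of measure concrete, the sketch below treats a stylized one-time-slot analogue of the many-sources event (an illustrative assumption, not the paper's change of measure that varies along the simulation run): n i.i.d. on-off sources, each 'on' with probability p, with overflow when the fraction of active sources exceeds c > p.

```python
import numpy as np

# Exponentially twisting a Bernoulli(p) to have mean c gives a Bernoulli(c);
# the per-source likelihood ratio is (p/c)^X * ((1-p)/(1-c))^(1-X), so for
# k active sources the log-likelihood ratio is
# k*log(p/c) + (n-k)*log((1-p)/(1-c)).

def many_sources_overflow(n, p, c, num_reps=200_000, seed=None):
    rng = np.random.default_rng(seed)
    k = rng.binomial(n, c, size=num_reps)        # active sources under twist
    log_lr = k * np.log(p / c) + (n - k) * np.log((1 - p) / (1 - c))
    w = np.exp(log_lr) * (k >= n * c)
    return w.mean()

# The estimate decays exponentially in n, matching the abstract's scaling.
for n in (100, 200, 400):
    print(n, many_sources_overflow(n, p=0.3, c=0.5, seed=0))
```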

    Queueing networks: rare events and fast simulations

    This monograph focuses on rare events. Even though they are extremely unlikely, they can still occur, and when they do, the consequences can be significant. We mainly consider rare events in queueing networks. More precisely, we are interested in the probability of collecting some large number of jobs in the downstream queue of a two-node tandem network. We consider the Jackson network case, as well as a generalization, the so-called slowdown network. In practice these models can be used to model overflows in telecommunication networks. We chose these networks as a first step in developing a methodology that can be extended to more general networks. We investigate rare events from two different sides. On the one hand, we are interested in the nature of the event, i.e., how the event 'builds up'. We first identify the structure of a specific path to overflow, which plays the role of our candidate for the most probable trajectory to overflow. We use some simple, but powerful, large deviations based heuristics to this end. The shape of the trajectory crucially depends on both the starting state and the system parameters. We then provide a rigorous proof that this trajectory is indeed the most probable path to overflow. Thus our method combines simplicity (as the trajectory is easy to identify) and precision (as it is backed up by rigorous mathematical support). On the other hand, our ultimate goal is to design accurate and efficient techniques to estimate the probability of our interest; in particular, we aim for techniques that are asymptotically efficient, which effectively means that the number of replications needed to obtain an estimator with predetermined relative error grows sub-exponentially while the probability of interest decays exponentially. We present several importance sampling schemes based on the large deviations results. We begin with naïve, state-independent algorithms and end up with a family of simple and efficient state-dependent schemes. We also develop a multilevel splitting scheme, which turns out to be efficient for a wider class of processes. Strengths and weaknesses of the importance sampling and multilevel splitting schemes are also discussed in this work.
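    The multilevel splitting idea mentioned in this abstract can be illustrated on the embedded random walk of an M/M/1 queue (a deliberately simple stand-in, not the monograph's tandem setting): estimate the probability that the queue reaches level L before emptying, starting from one job, as a product of level-to-level conditional hitting probabilities. All names and parameters below are illustrative.

```python
import numpy as np

# Fixed-effort multilevel splitting sketch for a birth-death walk: stage k
# estimates P(hit k+1 before 0 | start at k), and the product over stages
# estimates P(hit L before 0 | start at 1).  The stages decouple here
# because the entry state at each level is the level itself, so the strong
# Markov property applies.

def hits_next_level(start, p_up, rng):
    """Run the walk from `start` until it reaches start+1 or 0."""
    x = start
    while 0 < x <= start:
        x += 1 if rng.random() < p_up else -1
    return x > start

def splitting_estimate(L, lam=1.0, mu=2.0, reps_per_stage=5_000, seed=None):
    rng = np.random.default_rng(seed)
    p_up = lam / (lam + mu)
    est = 1.0
    for level in range(1, L):
        hits = sum(hits_next_level(level, p_up, rng)
                   for _ in range(reps_per_stage))
        est *= hits / reps_per_stage
    return est

L, r = 12, 2.0                 # r = mu/lam; exact value by gambler's ruin
print(splitting_estimate(L, seed=0), (r - 1.0) / (r**L - 1.0))
```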

    State-dependent Importance Sampling for a Slow-down Tandem Queue

    In this paper we investigate an advanced variant of the classical (Jackson) tandem queue, viz. a two-node system with server slow-down. The primary objective of the slow-down mechanism is to protect the downstream queue from frequent overflows, and it does so by reducing the service speed of the upstream queue as soon as the number of jobs in the downstream queue reaches some pre-specified threshold. To assess the efficacy of such a policy, techniques are needed for evaluating overflow metrics of the second queue. We focus on estimating the probability of the following rare event: overflow in the downstream queue before the system empties, starting from any given state in the state space. Due to the rarity of the event under consideration, naive, direct Monte Carlo simulation is often infeasible. We therefore rely on the application of importance sampling to obtain variance reduction. The principal contribution of this paper is that we construct an importance sampling scheme that is asymptotically efficient. In more detail, the paper addresses the following issues. (i) We rely on powerful heuristics to identify the exponential decay rate of the probability under consideration, and verify this result by applying sample-path large deviations techniques. (ii) Immediately from these heuristics, we develop a proposal for a change of measure to be used in importance sampling. (iii) We prove that the resulting algorithm is asymptotically efficient, which effectively means that the number of runs required to obtain an estimate with fixed precision grows subexponentially in the buffer size. We stress that our method for proving asymptotic efficiency is substantially shorter and more straightforward than those usually provided in the literature. Our setting is also more general than the situations analyzed so far, as we allow the process to start in any state of the state space, and in addition we do not impose any conditions on the values of the arrival rate and service rates, as long as the underlying queueing system is stable.
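    For contrast with the state-dependent scheme this abstract describes, the sketch below implements the classical state-independent change of measure for a plain Jackson tandem, interchanging the arrival rate and the downstream service rate in the spirit of Parekh and Walrand. This is an illustrative baseline only (the rates, start state, and level B are assumptions), not the paper's slow-down scheme.

```python
import numpy as np

# State-independent importance sampling for a two-node Jackson tandem.
# We estimate P(downstream queue reaches B before the system empties),
# starting with one job upstream, using the uniformized jump chain;
# 'phantom' events at empty queues keep the total rate fixed and must
# still contribute to the likelihood ratio.

def tandem_overflow(B, lam=1.0, mu1=4.0, mu2=2.0, num_reps=10_000, seed=None):
    rng = np.random.default_rng(seed)
    t_lam, t_mu2 = mu2, lam                 # swap lambda and mu2 (bottleneck)
    total = lam + mu1 + mu2                 # same under both measures
    probs = np.array([t_lam, mu1, t_mu2]) / total
    w = np.zeros(num_reps)
    for rep in range(num_reps):
        x1, x2, log_lr = 1, 0, 0.0
        while (x1, x2) != (0, 0) and x2 < B:
            ev = rng.choice(3, p=probs)
            if ev == 0:                     # arrival
                log_lr += np.log(lam / t_lam)
                x1 += 1
            elif ev == 1:                   # service at queue 1 (rate unchanged)
                if x1 > 0:
                    x1, x2 = x1 - 1, x2 + 1
            else:                           # service at queue 2, incl. phantoms
                log_lr += np.log(mu2 / t_mu2)
                if x2 > 0:
                    x2 -= 1
        if x2 >= B:
            w[rep] = np.exp(log_lr)
    return w.mean()

print(tandem_overflow(B=15, seed=0))
```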

    Importance Sampling for multi-constraints rare event probability

    Improving importance sampling estimators for rare event probabilities requires sharp approximations of the optimal density, leading to a nearly zero-variance estimator. This paper presents a new way to handle the estimation of the probability of a rare event defined as a finite intersection of subsets. We provide a sharp approximation of the density of long runs of a random walk conditioned by multiple constraints, each of them defined by an average of a function of its summands, as their number tends to infinity.
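    In symbols (notation assumed here for concreteness: X_1, ..., X_n are the summands of the walk and f_1, ..., f_m the constraint functions), the rare event is the finite intersection

$$E_n \;=\; \bigcap_{j=1}^{m} \Big\{ \frac{1}{n} \sum_{i=1}^{n} f_j(X_i) \ge a_j \Big\},$$

    and the proposed sampler draws long runs of the walk from a sharp approximation of its conditional density given E_n, which is what pushes the estimator's variance toward zero.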

    Simple and efficient importance sampling scheme for a tandem queue with server slow-down

    This paper considers importance sampling as a tool for rare-event simulation. The system at hand is a so-called tandem queue with slow-down, which essentially means that the server of the first queue (or: upstream queue) switches to a lower speed when the second queue (downstream queue) exceeds some threshold. The goal is to assess to what extent such a policy succeeds in protecting the first queue, and therefore we focus on estimating the probability of overflow in the downstream queue. It is known that in this setting importance sampling with traditional state-independent distributions performs poorly. More sophisticated state-dependent schemes can be shown to be asymptotically efficient, but their implementation may be problematic, as the new measure has to be computed for each state. This paper presents an algorithm that is considerably simpler than the fully state-dependent scheme; it requires low computational effort, but still has high efficiency.