    Semiparametric Cross Entropy for rare-event simulation

    The Cross Entropy method is a well-known adaptive importance sampling method for rare-event probability estimation, which requires estimating an optimal importance sampling density within a parametric class. In this article we estimate an optimal importance sampling density within a wider semiparametric class of distributions. We show that this semiparametric version of the Cross Entropy method frequently yields efficient estimators. We illustrate the excellent practical performance of the method with numerical experiments and show that for the problems we consider it typically outperforms alternative schemes by orders of magnitude.
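    For orientation, the sketch below implements the standard parametric Cross Entropy iteration that the semiparametric method generalizes, applied to the toy problem of estimating P(X > gamma) for an exponential random variable. The sampling family, parameter values, and function names are illustrative assumptions, not code from the paper.

```python
# Parametric Cross Entropy sketch (assumed toy setup): estimate P(X > gamma)
# for X ~ Exp(mean=u) using importance sampling densities Exp(mean=v).
import numpy as np

rng = np.random.default_rng(0)

def exp_pdf(x, mean):
    return np.exp(-x / mean) / mean

def ce_rare_event(u=1.0, gamma=20.0, n=10_000, rho=0.1):
    v = u                                     # start sampling from the nominal density
    level = -np.inf
    while level < gamma:                      # multilevel phase: raise the level gradually
        x = rng.exponential(v, size=n)
        level = min(gamma, np.quantile(x, 1 - rho))
        w = exp_pdf(x, u) / exp_pdf(x, v)     # likelihood ratios back to the nominal density
        elite = x >= level
        v = np.sum(w[elite] * x[elite]) / np.sum(w[elite])   # CE update: weighted MLE of the mean
    x = rng.exponential(v, size=n)            # final importance-sampling estimate
    w = exp_pdf(x, u) / exp_pdf(x, v)
    return np.mean(w * (x > gamma)), v

est, v_final = ce_rare_event()
print(est, np.exp(-20.0))                     # exact tail probability is exp(-gamma/u)
```

    The semiparametric variant described above replaces the weighted maximum-likelihood update over this fixed parametric family with an estimate over a wider semiparametric class of densities.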

    Efficient Rare-Event Simulation for Multiple Jump Events in Regularly Varying Random Walks and Compound Poisson Processes

    We propose a class of strongly efficient rare-event simulation estimators for random walks and compound Poisson processes with a regularly varying increment/jump-size distribution in a general large deviations regime. Our estimator is based on an importance sampling strategy that hinges on the heavy-tailed sample path large deviations result recently established in Rhee, Blanchet, and Zwart (2016). The new estimators are straightforward to implement and can be used to systematically evaluate the probability of a wide range of rare events with bounded relative error. They are "universal" in the sense that a single importance sampling scheme applies to a very general class of rare events that arise in heavy-tailed systems. In particular, our estimators can deal with rare events that are caused by multiple big jumps (therefore, beyond the usual principle of a single big jump) as well as multidimensional processes such as the buffer content process of a queueing network. We illustrate the versatility of our approach with several applications that arise in the context of mathematical finance, actuarial science, and queueing theory.
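    Illustrative only: the sketch below is not the authors' sample-path scheme but a classical single-big-jump defensive mixture for a fixed-length sum of regularly varying increments. It conveys the basic mechanism (mix the nominal law with a component that forces one large jump, then reweight by the likelihood ratio); the Pareto tail, mixture weight, and threshold are assumed values, and the multiple-jump and multidimensional cases handled in the paper are not covered.

```python
# Single-big-jump mixture importance sampling (assumed toy setup): estimate
# P(X_1 + ... + X_n > b) for i.i.d. Pareto-type (regularly varying) increments.
import numpy as np

rng = np.random.default_rng(1)
alpha, n, b = 1.5, 10, 1_000.0            # tail index, number of increments, rare level

def sample_pareto(size, lo=0.0):
    # Lomax/Pareto survival P(X > x) = (1 + x)^(-alpha); 'lo' draws the tail beyond lo
    u = rng.uniform(size=size)
    return (1.0 + lo) * u ** (-1.0 / alpha) - 1.0

def estimate(num_samples=50_000, p=0.5, t=None):
    t = b if t is None else t
    tail_t = (1.0 + t) ** (-alpha)        # P(X > t), needed for the likelihood ratio
    vals = np.empty(num_samples)
    for k in range(num_samples):
        x = sample_pareto(n)
        if rng.uniform() > p:             # with prob 1 - p, force one "big jump" beyond t
            j = rng.integers(n)
            x[j] = sample_pareto(1, lo=t)[0]
        # likelihood ratio of the nominal law w.r.t. the defensive mixture proposal
        lr = 1.0 / (p + (1.0 - p) / n * np.sum(x > t) / tail_t)
        vals[k] = lr * (x.sum() > b)
    return vals.mean(), vals.std() / np.sqrt(num_samples)

print(estimate())                         # roughly n * (1 + b)^(-alpha) by the big-jump heuristic
```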

    Efficient Simulation and Conditional Functional Limit Theorems for Ruinous Heavy-tailed Random Walks

    The contribution of this paper is to introduce change of measure based techniques for the rare-event analysis of heavy-tailed stochastic processes. Our changes-of-measure are parameterized by a family of distributions admitting a mixture form. We exploit our methodology to achieve two types of results. First, we construct Monte Carlo estimators that are strongly efficient (i.e. have bounded relative mean squared error as the event of interest becomes rare). These estimators are used to estimate both rare-event probabilities of interest and associated conditional expectations. We emphasize that our techniques allow us to control the expected termination time of the Monte Carlo algorithm even if the conditional expected stopping time (under the original distribution) given the event of interest is infinity -- a situation that sometimes occurs in heavy-tailed settings. Second, the mixture family serves as a good approximation (in total variation) of the conditional distribution of the whole process given the rare event of interest. The convenient form of the mixture family allows us to obtain, as a corollary, functional conditional central limit theorems that extend classical results in the literature. We illustrate our methodology in the context of the ruin probability $P(\sup_n S_n > b)$, where $S_n$ is a random walk with heavy-tailed increments that have negative drift. Our techniques are based on the use of Lyapunov inequalities for variance control and termination time. The conditional limit theorems combine the application of Lyapunov bounds with coupling arguments.
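    To make the mixture-of-measures mechanism concrete, here is a deliberately naive state-dependent sketch for assumed Pareto-type jumps with negative drift: at each step, with small probability the next jump is drawn from its conditional tail beyond the remaining distance to the level b, and the likelihood ratio is accumulated along the path. To keep the naive version well behaved it estimates only the finite-horizon probability P(max_{n <= N} S_n > b); handling the infinite-horizon ruin probability with controlled termination time and bounded relative error is exactly what the paper's mixture family and Lyapunov machinery provide and what this sketch omits.

```python
# Naive state-dependent mixture importance sampling (assumed toy setup):
# estimate P(max_{n <= horizon} S_n > b) for S_n = sum_{k<=n} (J_k - c),
# with Pareto jumps J_k and negative drift (E[J] < c).
import numpy as np

rng = np.random.default_rng(2)
alpha, c, b, horizon, q = 2.5, 1.0, 200.0, 50, 0.05   # jump tail index, drift, level, horizon, forcing prob

def pareto(lo=0.0):
    # jump survival P(J > x) = (1 + x)^(-alpha); 'lo' draws from the conditional tail beyond lo
    return (1.0 + lo) * rng.uniform() ** (-1.0 / alpha) - 1.0

def sbar(x):                                   # survival function of the jump J
    return (1.0 + x) ** (-alpha)

def one_path():
    s, lr = 0.0, 1.0
    for _ in range(horizon):
        t = (b - s) + c                        # jump size needed to cross b on this step
        forced = rng.uniform() < q
        j = pareto(lo=t) if forced else pareto()          # mixture: conditional tail vs. nominal law
        lr *= 1.0 / (q * (j > t) / sbar(t) + (1.0 - q))   # accumulate the likelihood ratio
        s += j - c
        if s > b:
            return lr                          # crossed: return the importance-sampling weight
    return 0.0                                 # no crossing within the horizon

est = np.mean([one_path() for _ in range(20_000)])
print(est)                                     # unbiased for P(max_{n <= horizon} S_n > b)
```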

    Efficient simulation of ruin probabilities when claims are mixtures of heavy and light tails

    We consider the classical Cramér-Lundberg risk model with claim sizes that are mixtures of phase-type and subexponential variables. Exploiting a specific geometric compound representation, we propose control variate techniques to efficiently simulate the ruin probability in this situation. The resulting estimators perform well for both small and large initial capital. We quantify the variance reduction as well as the efficiency gain of our method over another fast standard technique based on the classical Pollaczek-Khinchine formula. We provide a numerical example to illustrate the performance, and show that for more time-consuming conditional Monte Carlo techniques, the new series representation also does not compare unfavorably to the one based on the Pollaczek-Khinchine formula.
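    For context, the Pollaczek-Khinchine formula writes the ruin probability as the tail of a geometric compound sum, which can be simulated directly. The sketch below runs crude Monte Carlo on that representation for an assumed pure-Pareto claim distribution; it is the kind of standard baseline the control-variate estimators are compared against, not the proposed method.

```python
# Crude Monte Carlo on the Pollaczek-Khinchine representation (assumed toy setup):
# psi(u) = P(Y_1 + ... + Y_N > u), N ~ Geometric with P(N = k) = (1 - rho) * rho^k,
# Y_i i.i.d. from the integrated-tail (equilibrium) claim distribution. For Pareto
# claims with survival (1 + x)^(-alpha), the equilibrium law is Pareto with index alpha - 1.
import numpy as np

rng = np.random.default_rng(3)
alpha, rho, u = 2.5, 0.7, 100.0             # claim tail index, traffic load, initial capital

def ruin_prob_pk(num_samples=100_000):
    n = rng.geometric(1.0 - rho, size=num_samples) - 1      # numpy's geometric counts from 1
    hits = np.empty(num_samples)
    for k, nk in enumerate(n):
        y = rng.uniform(size=nk) ** (-1.0 / (alpha - 1.0)) - 1.0   # equilibrium claim sizes
        hits[k] = y.sum() > u
    return hits.mean()

print(ruin_prob_pk())                       # asymptotically ~ rho / (1 - rho) * (1 + u)^(-(alpha - 1))
```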

    Linear Stochastic Fluid Networks: Rare-Event Simulation and Markov Modulation

    We consider a linear stochastic fluid network under Markov modulation, with a focus on the probability that the joint storage level attains a value in a rare set at a given point in time. The main objective is to develop efficient importance sampling algorithms with provable performance guarantees. For linear stochastic fluid networks without modulation, we prove that the number of runs needed (so as to obtain an estimate with a given precision) increases polynomially (whereas the probability under consideration decays essentially exponentially); for networks operating in the slow modulation regime, our algorithm is asymptotically efficient. Our techniques are in the tradition of the rare-event simulation procedures that were developed for the sample mean of i.i.d. one-dimensional light-tailed random variables, and intensively use the idea of exponential twisting. In passing, we also point out how to set up a recursion to evaluate the (transient and stationary) moments of the joint storage level in Markov-modulated linear stochastic fluid networks.
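    As background, the sketch below shows exponential twisting in the classical setting referred to above (the sample mean of i.i.d. one-dimensional light-tailed variables): sample under an exponentially tilted law and reweight by the likelihood ratio. The standard normal increments and tilt parameter are assumed toy choices; the Markov-modulated fluid-network algorithm of the paper is considerably more involved.

```python
# Exponential twisting for a sample-mean large deviation (assumed toy setup):
# estimate P(S_n / n >= a) for i.i.d. N(0, 1) increments by tilting each to N(a, 1).
import math
import numpy as np

rng = np.random.default_rng(4)
n, a, num_samples = 100, 0.5, 50_000

theta = a                                   # optimal tilt for N(0, 1): psi'(theta) = theta = a
psi = 0.5 * theta ** 2                      # cumulant generating function of N(0, 1) at theta
x = rng.normal(loc=a, scale=1.0, size=(num_samples, n))    # sample under the tilted law
s = x.sum(axis=1)
lr = np.exp(-theta * s + n * psi)           # likelihood ratio back to the nominal law
est = np.mean(lr * (s >= n * a))

exact = 0.5 * math.erfc(math.sqrt(n) * a / math.sqrt(2.0)) # P(N(0, n) >= n * a)
print(est, exact)
```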

    The Mathematics and Statistics of Quantitative Risk Management

    [no abstract available]