
    Importance sampling the union of rare events with an application to power systems analysis

We consider importance sampling to estimate the probability $\mu$ of a union of $J$ rare events $H_j$ defined by a random variable $\boldsymbol{x}$. The sampler we study has been used in spatial statistics, genomics and combinatorics going back at least to Karp and Luby (1983). It works by sampling one event at random, then sampling $\boldsymbol{x}$ conditionally on that event happening, and it constructs an unbiased estimate of $\mu$ by multiplying an inverse moment of the number of occurring events by the union bound. We prove some variance bounds for this sampler. For a sample size of $n$, it has a variance no larger than $\mu(\bar\mu-\mu)/n$, where $\bar\mu$ is the union bound. It also has a coefficient of variation no larger than $\sqrt{(J+J^{-1}-2)/(4n)}$ regardless of the overlap pattern among the $J$ events. Our motivating problem comes from power system reliability, where the phase differences between connected nodes have a joint Gaussian distribution and the $J$ rare events arise from unacceptably large phase differences. In the grid reliability problems, even some events defined by 5772 constraints in 326 dimensions, with probability below $10^{-22}$, are estimated with a coefficient of variation of about 0.0024 with only $n=10{,}000$ sample values.
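For concreteness, here is a minimal sketch of the sampler the abstract describes, on a toy problem of Gaussian half-space events rather than the paper's power-grid constraints; the event directions, thresholds and sample size below are illustrative assumptions.

```python
# A minimal sketch of the union-of-rare-events sampler described above, on a
# toy problem: x ~ N(0, I_2) with J = 3 half-space events H_j = {a_j^T x > tau_j}
# (unit-norm a_j). The events and thresholds are illustrative, not the paper's
# power-grid setting.
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.0, 1.0], [np.sqrt(0.5), np.sqrt(0.5)]])  # unit rows a_j
tau = np.array([3.0, 3.0, 3.5])

p = norm.sf(tau)      # P(H_j): a_j^T x is standard normal for unit-norm a_j
mu_bar = p.sum()      # union bound

def estimate_union(n):
    # Sample an event j with probability P(H_j)/mu_bar, then sample x | H_j.
    js = rng.choice(len(tau), size=n, p=p / mu_bar)
    inv_S = np.empty(n)
    for i, j in enumerate(js):
        u = truncnorm.rvs(tau[j], np.inf, random_state=rng)  # a_j^T x given H_j
        z = rng.standard_normal(2)
        x = u * A[j] + (z - (z @ A[j]) * A[j])  # add the orthogonal component
        inv_S[i] = 1.0 / np.sum(A @ x > tau)    # S = number of events occurring at x
    return mu_bar * inv_S.mean()                # unbiased: E[mu_bar / S] = mu

print(estimate_union(10_000))
```

Since $1/S \le 1$, the estimate never exceeds the union bound $\bar\mu$, which is the mechanism behind the $\mu(\bar\mu-\mu)/n$ variance bound quoted above.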

    Robust estimation of risks from small samples

Data-driven risk analysis involves the inference of probability distributions from measured or simulated data. In the case of a highly reliable system, such as the electricity grid, the amount of relevant data is often exceedingly limited, but the impact of estimation errors may be very large. This paper presents a robust nonparametric Bayesian method to infer possible underlying distributions. The method obtains rigorous error bounds even for small samples taken from ill-behaved distributions. The approach taken has a natural interpretation in terms of the intervals between ordered observations, where allocation of probability mass across intervals is well-specified, but the location of that mass within each interval is unconstrained. This formulation gives rise to a straightforward computational resampling method: Bayesian Interval Sampling. In a comparison with common alternative approaches, it is shown to satisfy strict error bounds even for ill-behaved distributions.
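A hedged sketch of how such an interval-based resampler might look: mass across the intervals between ordered observations is drawn from a flat Dirichlet prior (an assumed choice, not necessarily the paper's), and pushing each interval's mass to its left or right endpoint brackets the quantity of interest, here the mean.

```python
# A hedged sketch of an interval-based resampler in the spirit described above:
# probability mass across the n + 1 intervals formed by the sorted observations
# (plus known bounds lo, hi) is drawn from a flat Dirichlet prior; within each
# interval the mass location is unconstrained, so pushing it to either endpoint
# bounds the mean. The prior and bound handling are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def interval_bounds_on_mean(data, lo, hi, n_resamples=5000):
    edges = np.concatenate(([lo], np.sort(data), [hi]))  # interval endpoints
    k = len(edges) - 1                                   # number of intervals
    lower, upper = np.empty(n_resamples), np.empty(n_resamples)
    for i in range(n_resamples):
        w = rng.dirichlet(np.ones(k))    # random mass allocation across intervals
        lower[i] = w @ edges[:-1]        # all mass at left endpoints: smallest mean
        upper[i] = w @ edges[1:]         # all mass at right endpoints: largest mean
    return np.percentile(lower, 2.5), np.percentile(upper, 97.5)

sample = rng.exponential(1.0, size=10)   # deliberately small sample
print(interval_bounds_on_mean(sample, lo=0.0, hi=20.0))
```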

    Adaptive Importance Sampling for Performance Evaluation and Parameter Optimization of Communication Systems

We present new adaptive importance sampling techniques based on stochastic Newton recursions. Their applicability to the performance evaluation of communication systems is studied. Besides bit-error rate (BER) estimation, the techniques are used for system parameter optimization. Two system models that are analytically tractable are employed to demonstrate the validity of the techniques. As an application to situations that are analytically intractable and numerically intensive, the influence of crosstalk in a wavelength-division multiplexing (WDM) crossconnect is assessed. In order to consider a realistic system model, optimal setting of thresholds in the detector is carried out while estimating error rate performances. Resulting BER estimates indicate that the tolerable crosstalk levels are significantly higher than predicted in the literature. This finding has a strong impact on the design of WDM networks. Power penalties induced by the addition of channels can also be accurately predicted in a short run time.
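The sketch below illustrates the general idea of tuning an importance-sampling parameter by a stochastic Newton recursion, on a Gaussian threshold-crossing problem standing in for BER estimation; it is not the paper's algorithm, and the proposal family, update rule and all parameter values are assumptions.

```python
# A sketch (not the paper's algorithm) of adaptive importance sampling driven
# by a stochastic Newton recursion: estimate P(X > t) for X ~ N(0,1) with a
# mean-shifted proposal N(theta, 1). The recursion minimizes the estimator's
# second moment m(theta) = E_p[1{X>t} w(X;theta)], where
# w(x;theta) = exp(-theta*x + theta^2/2) is the likelihood ratio p/q_theta;
# its gradient and Hessian in theta have closed forms used below.
import numpy as np

rng = np.random.default_rng(2)
t, theta = 3.0, 2.0          # threshold and initial tilt (illustrative values)

for _ in range(100):
    x = theta + rng.standard_normal(1000)               # draw from current proposal
    hw2 = (x > t) * np.exp(-2 * theta * x + theta**2)   # 1{x>t} * w(x;theta)^2
    g = np.mean(hw2 * (theta - x))                      # estimate of dm/dtheta
    H = np.mean(hw2 * ((theta - x) ** 2 + 1))           # estimate of d2m/dtheta2
    theta -= g / max(H, 1e-12)                          # guarded Newton step

# Final estimate under the adapted proposal; theta settles near the threshold.
y = theta + rng.standard_normal(200_000)
w = np.exp(-theta * y + theta**2 / 2)                   # likelihood ratio p/q
print(theta, np.mean((y > t) * w))                      # true value: norm.sf(3) ~ 1.35e-3
```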

    Importance Sampling and its Optimality for Stochastic Simulation Models

We consider the problem of estimating an expected outcome from a stochastic simulation model. Our goal is to develop a theoretical framework on importance sampling for such estimation. By investigating the variance of an importance sampling estimator, we propose a two-stage procedure that involves a regression stage and a sampling stage to construct the final estimator. We introduce a parametric and a nonparametric regression estimator in the first stage and study how the allocation between the two stages affects the performance of the final estimator. We analyze the variance reduction rates and derive oracle properties of both methods. We evaluate the empirical performances of the methods using two numerical examples and a case study on wind turbine reliability evaluation.
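A rough illustration of such a two-stage scheme on a toy stochastic simulator: a pilot (regression) stage fits a binned estimate of E[Y² | X = x], and the sampling stage draws from a proposal proportional to p(x)·√(estimate), the variance-minimizing shape in this setting. The simulator, the binned regression and the grid truncation are all illustrative assumptions, not the paper's estimators.

```python
# A hedged sketch of a regression-then-sampling importance sampler on a toy
# simulator: X ~ N(0,1), stochastic output Y = 1{X + Z > 2} with fresh noise Z.
# Stage 1 fits a crude binned estimate of m(x) = E[Y^2 | X=x]; stage 2 samples
# a piecewise-constant proposal q(x) proportional to p(x)*sqrt(m_hat(x)).
# Truncating the grid to [-6, 6] is an approximation (negligible mass outside).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
simulate = lambda x: (x + rng.standard_normal(x.shape) > 2.0).astype(float)

# Stage 1: pilot runs under the nominal density, binned regression of Y^2 on X.
x0, y0 = rng.standard_normal(2000), None
y0 = simulate(x0)
edges = np.linspace(-6, 6, 61)
idx = np.clip(np.digitize(x0, edges) - 1, 0, 59)
m_hat = np.full(60, 1e-4)                        # floor keeps q positive everywhere
for b in range(60):
    if np.any(idx == b):
        m_hat[b] = max(np.mean(y0[idx == b] ** 2), 1e-4)

# Stage 2: sample the proposal and reweight by p/q.
centers, width = 0.5 * (edges[:-1] + edges[1:]), edges[1] - edges[0]
cell_p = norm.pdf(centers) * np.sqrt(m_hat)
cell_p /= cell_p.sum()
cells = rng.choice(60, size=20_000, p=cell_p)
x1 = edges[cells] + width * rng.random(20_000)   # uniform within the chosen cell
w = norm.pdf(x1) / (cell_p[cells] / width)       # importance weights p(x)/q(x)
print(np.mean(simulate(x1) * w))                 # true value ~ norm.sf(2/np.sqrt(2)) ~ 0.0786
```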

    Large-deviation principles for connectable receivers in wireless networks

We study large-deviation principles for a model of wireless networks consisting of Poisson point processes of transmitters and receivers, respectively. To each transmitter we associate a family of connectable receivers whose signal-to-interference-and-noise ratio is larger than a certain connectivity threshold. First, we show a large-deviation principle for the empirical measure of connectable receivers associated with transmitters in large boxes. Second, making use of the observation that the receivers connectable to the origin form a Cox point process, we derive a large-deviation principle for the rescaled process of these receivers as the connection threshold tends to zero. Finally, we show how these results can be used to develop importance-sampling algorithms that substantially reduce the variance for the estimation of probabilities of certain rare events such as users being unable to connect.
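As a heavily simplified illustration of the intensity-tilting importance sampling the abstract alludes to, the sketch below estimates the probability that no transmitter lies within range of the origin — a noise-limited stand-in for non-connectability that ignores interference and SINR entirely; the proposal thins the Poisson intensity inside the disc and reweights by the Poisson likelihood ratio. All parameter values are illustrative.

```python
# Rare event: no transmitter of a homogeneous Poisson process (intensity lam)
# within distance R of the origin. Proposal: same process but with thinned
# intensity lam_q inside the disc; the event depends only on the disc count,
# whose Poisson likelihood ratio gives the importance weight.
import numpy as np

rng = np.random.default_rng(4)
lam, lam_q, R = 2.0, 0.05, 2.0
area = np.pi * R**2
true_p = np.exp(-lam * area)                 # ~ 1.2e-11, hopeless for plain MC

n = 100_000
N_disc = rng.poisson(lam_q * area, size=n)   # transmitters in the disc, proposal
# Likelihood ratio of the lam-count against the lam_q-count:
w = np.exp(-(lam - lam_q) * area) * (lam / lam_q) ** N_disc
print(np.mean((N_disc == 0) * w), true_p)    # the two should agree closely
```

On the event {N_disc = 0} the weight is constant, so this particular tilt gives a very low-variance estimator; choosing a good tilt in the full SINR model is exactly where the paper's large-deviation analysis comes in.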