Conservative Confidence Intervals of Importance Sampling Estimates

Abstract

Confidence intervals (CIs) are used to gauge the accuracy of bit error rate (BER) estimates produced by Monte Carlo (MC) simulations. This work attempts to objectively evaluate the performance of importance sampling (IS) simulations by applying the same statistical analysis tool. While it is not possible to evaluate the minimum-size CI for arbitrary IS estimates, it is possible to over-bound the interval using a technique called conservative confidence interval (CCI) estimation. This bounding procedure is applied to a simple IS biasing technique. Although the IS estimate may be superior to the MC estimate, the CCI fails to support this claim. Since little previous work has been published on CIs for IS estimates, this document is offered as a starting point, in the hope that others will be able to develop tighter bounds for the CI of an IS estimate.
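The contrast the abstract draws can be illustrated with a toy sketch. The paper's exact CCI construction is not reproduced here; instead, this example uses a Gaussian tail probability as a stand-in for a BER, mean-shift biasing as the "simple IS biasing technique," and a Hoeffding-style bound on the (bounded) likelihood-ratio weights as one possible conservative over-bound. The problem choice, the shift amount `mu = t`, and the Hoeffding bound are all assumptions for illustration, not the authors' method.

```python
import math
import random


def mc_estimate(n, t, seed=0):
    """Plain MC estimate of p = P(X > t), X ~ N(0,1), with a normal-approx 95% CI."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > t)
    p = hits / n
    half = 1.96 * math.sqrt(p * (1.0 - p) / n)  # degenerates when hits == 0
    return p, half


def is_estimate(n, t, seed=0):
    """IS estimate of the same tail probability with two 95% interval half-widths:
    an empirical-variance CI and a conservative (guaranteed-coverage) over-bound."""
    rng = random.Random(seed)
    mu = t  # mean-shift biasing: sample from N(t, 1) so {X > t} is no longer rare
    weights = []
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        if x > t:
            # Likelihood ratio N(0,1)/N(mu,1) evaluated at x
            weights.append(math.exp(-mu * x + 0.5 * mu * mu))
        else:
            weights.append(0.0)
    p = sum(weights) / n

    # Empirical-variance CI: cheap, but can be optimistic when the weight
    # distribution is skewed -- the problem motivating a conservative bound.
    var = sum((w - p) ** 2 for w in weights) / (n - 1)
    half = 1.96 * math.sqrt(var / n)

    # Conservative over-bound: on {x > t} the weight is at most
    # w_max = exp(-t^2/2), so Hoeffding's inequality for variables in
    # [0, w_max] gives a half-width with guaranteed >= 95% coverage.
    w_max = math.exp(-0.5 * t * t)
    cons_half = w_max * math.sqrt(math.log(2.0 / 0.05) / (2.0 * n))
    return p, half, cons_half
```

As the abstract suggests, the conservative half-width is noticeably wider than the empirical one, so a CCI can fail to certify an IS advantage even when the IS estimator's actual variance is far below MC's.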