    How to estimate a cumulative process’s rate-function

    Consider two sequences of bounded random variables, a value process and a timing process, that satisfy the large deviation principle (LDP) with rate-function J(·,·) and whose cumulative process satisfies the LDP with rate-function I(·). Under mixing conditions, an LDP is proved for estimates of I constructed by transforming an estimate of J. For the case of cumulative renewal processes, it is demonstrated that this approach is preferable to a more direct method, as it ensures the laws of the estimates converge weakly to a Dirac measure at I.
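
    The following is a minimal sketch, assuming NumPy is available, of the generic "direct" route to a rate-function estimate for a toy cumulative renewal-reward process: simulate many independent copies, estimate the scaled cumulant generating function at a finite horizon by Monte Carlo, and apply a numerical Legendre-Fenchel transform. The process and all parameters are illustrative assumptions; this is not the paper's estimator, which instead transforms an estimate of the joint rate-function J.

```python
import numpy as np

def empirical_scgf(samples, t, thetas):
    """Finite-horizon Monte Carlo estimate of lambda(theta) ~= (1/t) log E[exp(theta * C(t))]."""
    c = np.asarray(samples, dtype=float)
    return np.array([np.log(np.mean(np.exp(th * c))) / t for th in thetas])

def legendre_transform(thetas, lam, xs):
    """Numerical Legendre-Fenchel transform: I(x) ~= sup_theta (theta * x - lambda(theta))."""
    return np.array([np.max(thetas * x - lam) for x in xs])

# Toy cumulative renewal-reward process: unit-rate exponential holding times,
# Bernoulli(0.3) rewards collected at each renewal (illustrative assumptions).
rng = np.random.default_rng(1)
t_horizon, n_paths = 50.0, 20000

def simulate_C(t):
    total, clock = 0.0, 0.0
    while True:
        clock += rng.exponential(1.0)        # holding time
        if clock > t:
            return total
        total += rng.binomial(1, 0.3)        # reward at the renewal

C_samples = [simulate_C(t_horizon) for _ in range(n_paths)]
thetas = np.linspace(-2.0, 2.0, 81)
lam_hat = empirical_scgf(C_samples, t_horizon, thetas)
xs = np.linspace(0.0, 0.8, 9)
I_hat = legendre_transform(thetas, lam_hat, xs)
print(dict(zip(np.round(xs, 2), np.round(I_hat, 3))))
```

    The abstract's point is that direct estimates of this kind can behave poorly, whereas the transformation of an estimate of J guarantees that the laws of the estimates concentrate on I.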

    Large Deviations and Transient Multiplexing at a Buffered Resource

    In this paper we discuss asymptotics associated with a large number of sources using a resource in a compact time interval. A large deviations condition is placed on the sum of the vectors that describe the stochastic behaviour of the sources, and large deviations results are deduced about the probability of exhaustion of the resource. This approach allows us to consider sources which are highly non-stationary in time. The examples in mind are a single-server queue and a form of the Cramér-Lundberg model from risk theory. Connection is made with past work on the stability of queues and effective bandwidths. A number of examples are presented to illustrate the strengths of this approach.
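
    To make the large-deviations heuristic concrete, here is a minimal sketch assuming $N$ i.i.d. on-off sources sharing a resource of capacity $Nc$ over the interval, a much simpler, stationary setting than the non-stationary sources the paper treats. The Chernoff bound $P(\sum_i X_i > Nc) \le \exp(-N \sup_{\theta \ge 0}(\theta c - \log E[e^{\theta X_1}]))$ gives the exponential decay rate of the exhaustion probability; all source parameters below are illustrative assumptions.

```python
import numpy as np

# Hypothetical on-off source: emits `peak` units of work with probability p, else 0.
peak, p = 1.0, 0.2

def log_mgf(theta):
    """log E[exp(theta * X)] for a single on-off source."""
    return np.log(1.0 - p + p * np.exp(theta * peak))

def exhaustion_exponent(c, theta_grid=np.linspace(0.0, 20.0, 4001)):
    """Chernoff exponent sup_theta (theta*c - log_mgf(theta)): per-source decay
    rate of P(aggregate demand > N*c) as the number of sources N grows."""
    return float(np.max(theta_grid * c - log_mgf(theta_grid)))

n_sources, capacity_per_source = 100, 0.35   # capacity N*c above the mean demand N*p
rate = exhaustion_exponent(capacity_per_source)
print(f"P(exhaustion) ~= exp(-{n_sources * rate:.1f})")
```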

    Most likely paths to error when estimating the mean of a reflected random walk

    It is known that simulation of the mean position of a Reflected Random Walk (RRW) $\{W_n\}$ exhibits non-standard behavior, even for light-tailed increment distributions with negative drift. The Large Deviation Principle (LDP) holds for deviations below the mean, but for deviations at the usual speed above the mean the rate function is null. This paper takes a deeper look at this phenomenon. Conditional on a large sample mean, a complete sample-path LDP analysis is obtained. Let $I$ denote the rate function for the one-dimensional increment process. If $I$ is coercive, then given a large simulated mean position, under general conditions our results imply that the most likely asymptotic behavior, $\psi^*$, of the paths $n^{-1} W_{\lfloor tn\rfloor}$ is to be zero apart from on an interval $[T_0,T_1]\subset[0,1]$ and to satisfy the functional equation
    \begin{align*}
    \nabla I\left(\frac{d}{dt}\psi^*(t)\right)=\lambda^*(T_1-t) \quad \text{whenever } \psi^*(t)\neq 0.
    \end{align*}
    If $I$ is non-coercive, a similar, but slightly more involved, result holds. These results prove, in broad generality, that Monte Carlo estimates of the steady-state mean position of a RRW have a high likelihood of over-estimation. This has serious implications for the performance evaluation of queueing systems by simulation techniques, where steady-state expected queue-length and waiting time are key performance metrics. The results show that naïve estimates of these quantities from simulation are highly likely to be conservative.
    Comment: 23 pages, 8 figures
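
    As a concrete picture of the simulation setting, the sketch below implements the naive estimator the abstract warns about: a reflected random walk generated by the Lindley recursion $W_{k+1} = \max(W_k + X_{k+1}, 0)$, whose time average is used as an estimate of the steady-state mean. The Gaussian increments with negative drift and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_rrw_mean(n_steps, drift=-0.5, sigma=1.0):
    """Lindley recursion W_{k+1} = max(W_k + X_{k+1}, 0), W_0 = 0, with Gaussian
    increments of negative drift; returns the time-averaged position, i.e. the
    naive estimate of the steady-state mean."""
    w, total = 0.0, 0.0
    for x in rng.normal(drift, sigma, size=n_steps):
        w = max(w + x, 0.0)
        total += w
    return total / n_steps

# Distribution of the naive estimator across independent runs.
estimates = np.array([simulate_rrw_mean(5_000) for _ in range(200)])
print(f"median estimate: {np.median(estimates):.3f}, "
      f"90th percentile: {np.percentile(estimates, 90):.3f}")
```

    The paper's sample-path results characterise how this estimator's rare large errors occur, showing that they are overwhelmingly over-estimates.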