
    Precursors of extreme increments

    We investigate precursors and predictability of extreme increments in a time series. The events we focus on consist of large increments within successive time steps. We are especially interested in understanding how the quality of the predictions depends on the strategy used to choose precursors, on the size of the event and on the correlation strength. We study the prediction of extreme increments analytically in an AR(1) process, and numerically in wind speed recordings and long-range correlated ARMA data. We evaluate the success of predictions via receiver operating characteristic (ROC) curves. Furthermore, we observe an increase of the quality of predictions with increasing event size and with decreasing correlation in all examples. Both effects can be understood by using the likelihood ratio as a summary index for smooth ROC curves.
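    The AR(1) setting above can be reproduced in a few lines. The sketch below is illustrative only: the parameter values a, d and n are made up, and it uses the current value x_t as the precursor variable, which is one possible strategy rather than the paper's specific choice. It simulates an AR(1) process, flags increments larger than d as events, and traces a ROC curve by sweeping the alarm threshold.

        # Minimal sketch (not the paper's code): ROC curve for predicting
        # extreme increments x_{t+1} - x_t >= d in an AR(1) process, using
        # the current value x_t as the precursor variable.
        import numpy as np

        rng = np.random.default_rng(0)

        def ar1(n, a, sigma=1.0):
            """Simulate x_{t+1} = a * x_t + sigma * eps_t."""
            x = np.zeros(n)
            for t in range(n - 1):
                x[t + 1] = a * x[t] + sigma * rng.standard_normal()
            return x

        a, d, n = 0.75, 3.0, 200_000          # correlation, event size, length
        x = ar1(n, a)
        event = (x[1:] - x[:-1]) >= d         # extreme positive increments

        # Alarm when x_t is below a decision threshold: large positive increments
        # are most likely after low values, since dx = (a - 1) * x_t + eps.
        precursor = x[:-1]
        thresholds = np.quantile(precursor, np.linspace(0.0, 1.0, 201))
        hits = np.array([np.mean(precursor[event] <= th) for th in thresholds])
        false_alarms = np.array([np.mean(precursor[~event] <= th) for th in thresholds])

        # Area under the ROC curve (trapezoid rule) as a summary of prediction skill.
        auc = np.sum(np.diff(false_alarms) * 0.5 * (hits[1:] + hits[:-1]))
        print(f"event rate {event.mean():.4f}, ROC area {auc:.3f}")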

    Importance sampling the union of rare events with an application to power systems analysis

    We consider importance sampling to estimate the probability $\mu$ of a union of $J$ rare events $H_j$ defined by a random variable $\boldsymbol{x}$. The sampler we study has been used in spatial statistics, genomics and combinatorics, going back at least to Karp and Luby (1983). It works by sampling one event at random, then sampling $\boldsymbol{x}$ conditionally on that event happening, and it constructs an unbiased estimate of $\mu$ by multiplying an inverse moment of the number of occurring events by the union bound. We prove some variance bounds for this sampler. For a sample size of $n$, it has a variance no larger than $\mu(\bar\mu-\mu)/n$, where $\bar\mu$ is the union bound. It also has a coefficient of variation no larger than $\sqrt{(J+J^{-1}-2)/(4n)}$ regardless of the overlap pattern among the $J$ events. Our motivating problem comes from power system reliability, where the phase differences between connected nodes have a joint Gaussian distribution and the $J$ rare events arise from unacceptably large phase differences. In the grid reliability problems, even some events defined by $5772$ constraints in $326$ dimensions, with probability below $10^{-22}$, are estimated with a coefficient of variation of about $0.0024$ with only $n=10{,}000$ sample values.
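    A minimal sketch of this sampler for half-space events $H_j = \{w_j \cdot x \ge t_j\}$ under a standard Gaussian input is given below; the test problem and the names w, t, dim and n are illustrative, not taken from the paper. Each draw picks an event with probability proportional to its marginal probability, samples x conditionally on that event, and averages the reciprocal of the number of occurring events, scaled by the union bound.

        # Hedged sketch of the union-of-rare-events importance sampler for
        # half-space events H_j = {w_j . x >= t_j}, with x ~ N(0, I).
        # The test problem (w, t, dimensions) is illustrative.
        import numpy as np
        from scipy.stats import norm, truncnorm

        rng = np.random.default_rng(1)
        dim, J = 8, 20
        w = rng.standard_normal((J, dim))
        w /= np.linalg.norm(w, axis=1, keepdims=True)   # unit event normals
        t = rng.uniform(3.5, 4.5, size=J)               # rare-event thresholds

        p = norm.sf(t)                                  # P(H_j) for unit w_j
        mu_bar = p.sum()                                # union bound

        n = 10_000
        inv_counts = np.empty(n)
        for i in range(n):
            j = rng.choice(J, p=p / mu_bar)             # pick an event ~ P(H_j)
            # Sample x | H_j: the component along w_j is a truncated normal
            # above t_j, the orthogonal part stays standard normal.
            z = truncnorm.rvs(t[j], np.inf, random_state=rng)
            g = rng.standard_normal(dim)
            x = g - (g @ w[j]) * w[j] + z * w[j]
            S = np.sum(w @ x >= t)                      # number of occurring events
            inv_counts[i] = 1.0 / S

        mu_hat = mu_bar * inv_counts.mean()             # unbiased union estimate
        se = mu_bar * inv_counts.std(ddof=1) / np.sqrt(n)
        print(f"union bound {mu_bar:.3e}, estimate {mu_hat:.3e} +/- {se:.1e}")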

    Zero-Crossing Statistics for Non-Markovian Time Series

    In applications spaning from image analysis and speech recognition, to energy dissipation in turbulence and time-to failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging. And therefore, few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero-crossings in a fixed time interval of a zero-mean Gaussian stationary processes. In this study we use the so-called Independent Interval Approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agrees well with simulations for the non-Markovian autoregressive model
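    For a discrete-time stationary Gaussian sequence, the mean number of sign changes per step follows from the orthant probability P(x_t x_{t+1} < 0) = arccos(rho_1)/pi, a discrete-time analogue of Rice's formula; higher cumulants have no such simple expression. The sketch below is an illustration with an AR(2) model and made-up coefficients, not the paper's calculation: it estimates the mean, variance and third cumulant of the zero-crossing count by simulation and checks the mean against the exact value.

        # Illustrative sketch (not the paper's derivation): zero-crossing
        # counts for a stationary, non-Markovian AR(2) sequence.
        import numpy as np

        rng = np.random.default_rng(2)
        phi1, phi2 = 0.5, 0.3                 # AR(2) coefficients (illustrative)
        T, n_windows, burn = 1_000, 1_000, 500

        def ar2_window():
            x = np.zeros(T + burn)
            for t in range(2, T + burn):
                x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.standard_normal()
            return x[burn:]                   # discard the transient

        counts = np.array([np.sum(np.diff(np.sign(ar2_window())) != 0)
                           for _ in range(n_windows)])

        rho1 = phi1 / (1.0 - phi2)                        # lag-1 autocorrelation
        mean_exact = (T - 1) * np.arccos(rho1) / np.pi    # exact mean count
        centered = counts - counts.mean()
        print(f"mean count: exact {mean_exact:.1f}, simulated {counts.mean():.1f}")
        print(f"variance {counts.var(ddof=1):.1f}, "
              f"third cumulant {np.mean(centered ** 3):.1f}")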

    Calculation of Generalized Polynomial-Chaos Basis Functions and Gauss Quadrature Rules in Hierarchical Uncertainty Quantification

    Stochastic spectral methods are efficient techniques for uncertainty quantification. Recently they have shown excellent performance in the statistical analysis of integrated circuits. In stochastic spectral methods, one needs to determine a set of orthonormal polynomials and a proper numerical quadrature rule. The former are used as the basis functions in a generalized polynomial chaos expansion. The latter is used to compute the integrals involved in stochastic spectral methods. Obtaining such information requires knowing the density function of the random input a priori. However, individual system components are often described by surrogate models rather than density functions. In order to apply stochastic spectral methods in hierarchical uncertainty quantification, we first propose to construct physically consistent closed-form density functions by two monotone interpolation schemes. Then, by exploiting the special forms of the obtained density functions, we determine the generalized polynomial-chaos basis functions and the Gauss quadrature rules that are required by a stochastic spectral simulator. The effectiveness of our proposed algorithm is verified by both synthetic and practical circuit examples. Comment: Published by IEEE Trans CAD in May 201
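    The step from a given density to basis polynomials and a quadrature rule is classical: run the Stieltjes procedure to obtain the three-term recurrence coefficients, then apply the Golub-Welsch eigenvalue method to the resulting Jacobi matrix. The sketch below assumes a generic closed-form density on a bounded interval; the bimodal example density is made up and is not the paper's monotone-interpolation construction.

        # Minimal sketch: orthonormal gPC basis and Gauss quadrature for a
        # generic closed-form density via the Stieltjes procedure and the
        # Golub-Welsch method.  The example density is made up.
        import numpy as np

        def gpc_rule(rho, lo, hi, order, n_grid=4001):
            """Gauss nodes/weights for the weight function rho on [lo, hi]."""
            x = np.linspace(lo, hi, n_grid)
            dx = x[1] - x[0]
            w = rho(x)
            w /= np.sum(w) * dx                      # normalize the density
            inner = lambda f: np.sum(f * w) * dx     # inner product <f>_rho

            # Stieltjes procedure for the recurrence coefficients of the monic
            # orthogonal polynomials p_{k+1} = (x - a_k) p_k - b_k p_{k-1}.
            alpha, beta = np.zeros(order), np.zeros(order)
            p_prev, p_cur, norm_cur = np.zeros_like(x), np.ones_like(x), 1.0
            for k in range(order):
                alpha[k] = inner(x * p_cur ** 2) / norm_cur
                p_next = (x - alpha[k]) * p_cur - (beta[k - 1] if k else 0.0) * p_prev
                norm_next = inner(p_next ** 2)
                beta[k] = norm_next / norm_cur
                p_prev, p_cur, norm_cur = p_cur, p_next, norm_next

            # Golub-Welsch: nodes are eigenvalues of the symmetric Jacobi
            # matrix, weights come from the first eigenvector components.
            jac = np.diag(alpha) + np.diag(np.sqrt(beta[:-1]), 1) \
                                 + np.diag(np.sqrt(beta[:-1]), -1)
            nodes, vecs = np.linalg.eigh(jac)
            weights = vecs[0, :] ** 2                # total mass is 1
            return nodes, weights

        # Made-up bimodal density standing in for a surrogate-model output.
        rho = lambda x: np.exp(-0.5 * (x - 1.0) ** 2) + 0.7 * np.exp(-2.0 * (x + 1.0) ** 2)
        nodes, weights = gpc_rule(rho, -6.0, 6.0, order=5)
        print(nodes, weights, weights.sum())         # weights sum to ~1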

    Counting function fluctuations and extreme value threshold in multifractal patterns: the case study of an ideal $1/f$ noise

    To understand the sample-to-sample fluctuations in disorder-generated multifractal patterns we investigate analytically as well as numerically the statistics of high values of the simplest model, the ideal periodic $1/f$ Gaussian noise. By employing the thermodynamic formalism we predict the characteristic scale and the precise scaling form of the distribution of the number of points above a given level. We demonstrate that the power-law forward tail of the probability density, with exponent controlled by the level, results in an important difference between the mean and the typical values of the counting function. This can be further used to determine the typical threshold $x_m$ of extreme values in the pattern, which turns out to be given by $x_m^{(typ)}=2-c\ln\ln M/\ln M$ with $c=3/2$. Such an observation provides a rather compelling explanation of the mechanism behind the universality of $c$. The revealed mechanisms are conjectured to retain their qualitative validity for a broad class of disorder-generated multifractal fields. In particular, we predict that the typical value of the maximum $p_{max}$ of intensity is given by $-\ln p_{max} = \alpha_{-}\ln M + \frac{3}{2f'(\alpha_{-})}\ln\ln M + O(1)$, where $f(\alpha)$ is the corresponding singularity spectrum vanishing at $\alpha=\alpha_{-}>0$. For the $1/f$ noise we also derive exact as well as well-controlled approximate formulas for the mean and the variance of the counting function without recourse to the thermodynamic formalism. Comment: 28 pages, 7 figures; published version with a few misprints corrected, editing done and references added
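    The model itself is easy to simulate. The sketch below generates the periodic $1/f$ Gaussian noise from its Fourier representation, normalized so that the per-point variance is approximately $2\ln M$ (one common convention, which may differ from the paper's exact one), and compares the typical maximum and the counting function at a fixed level against the quoted threshold formula.

        # Illustrative sketch: ideal periodic 1/f Gaussian noise built from
        # its Fourier representation, normalized so the per-point variance
        # is ~ 2 ln M (a common convention, possibly not the paper's).
        import numpy as np

        rng = np.random.default_rng(3)
        M = 2 ** 14                        # number of sample points
        K = M // 2                         # number of Fourier modes
        n_real = 200                       # independent realizations
        level = 1.5                        # count points with V > level * ln M

        k = np.arange(1, K + 1)
        maxima, counts = [], []
        for _ in range(n_real):
            a = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
            coeff = np.zeros(M, dtype=complex)
            coeff[1:K + 1] = a / np.sqrt(k)            # 1/f amplitudes
            V = 2.0 * np.real(M * np.fft.ifft(coeff))  # Var(V_i) ~ 2 ln(M/2)
            maxima.append(V.max())
            counts.append(np.sum(V > level * np.log(M)))

        lnM = np.log(M)
        x_typ = 2.0 - 1.5 * np.log(lnM) / lnM          # quoted typical threshold
        print(f"typical max / ln M: simulated {np.median(maxima) / lnM:.3f}, "
              f"predicted {x_typ:.3f}")
        print(f"counting function at x = {level}: mean {np.mean(counts):.1f}, "
              f"variance {np.var(counts):.1f}")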

    Failure Probability Estimation and Detection of Failure Surfaces via Adaptive Sequential Decomposition of the Design Domain

    We propose an algorithm for an optimal adaptive selection of points from the design domain of input random variables that are needed for an accurate estimation of failure probability and the determination of the boundary between safe and failure domains. The method is particularly useful when each evaluation of the performance function g(x) is very expensive and the function can be characterized as either highly nonlinear, noisy, or even discrete-state (e.g., binary). In such cases, only a limited number of calls is feasible, and gradients of g(x) cannot be used. The input design domain is progressively segmented by an expanding and adaptively refined mesh-like, lock-free geometrical structure. The proposed triangulation-based approach effectively combines the features of simulation and approximation methods. The algorithm performs two independent tasks: (i) the estimation of probabilities through an ingenious combination of deterministic cubature rules and the application of the divergence theorem and (ii) the sequential extension of the experimental design with new points. The sequential selection of points from the design domain for future evaluation of g(x) is carried out through a new learning function, which maximizes the instantaneous information gain in terms of the probability classification that corresponds to the local region. The extension may be halted at any time, e.g., when sufficiently accurate estimations are obtained. Due to the use of the exact geometric representation in the input domain, the algorithm is most effective for problems of a low dimension, not exceeding eight. The method can handle random vectors with correlated non-Gaussian marginals. The estimation accuracy can be improved by employing a smooth surrogate model. Finally, we define new factors of global sensitivity to failure based on the entire failure surface weighted by the density of the input random vector. Comment: 42 pages, 24 figures
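    A heavily simplified two-dimensional illustration of the geometric idea is sketched below: triangulate the design points, classify each simplex as safe, failure or mixed from the signs of g(x) at its vertices, and accumulate the failure probability by integrating the input density over fully failed simplices. The limit-state function, the point budget and the one-point centroid cubature are all illustrative assumptions; the paper's divergence-theorem cubature, learning function and adaptive refinement are not reproduced.

        # Simplified 2D illustration of the triangulation idea: classify
        # simplices by the signs of g(x) at their vertices and integrate the
        # input density over fully failed simplices.  The paper's cubature,
        # divergence-theorem trick and learning function are not reproduced.
        import numpy as np
        from scipy.spatial import Delaunay
        from scipy.stats import multivariate_normal

        def g(x):
            """Illustrative limit state: failure when g(x) <= 0."""
            return 3.0 - x[..., 0] - 0.5 * x[..., 1] ** 2

        rng = np.random.default_rng(4)
        box = 6.0                                        # truncated input domain [-6, 6]^2
        pts = rng.uniform(-box, box, size=(400, 2))      # initial experimental design
        tri = Delaunay(pts)
        pdf = multivariate_normal(mean=[0.0, 0.0]).pdf   # standard Gaussian input

        p_fail, mixed = 0.0, []
        for simplex in tri.simplices:
            verts = pts[simplex]
            failed = g(verts) <= 0.0
            e1, e2 = verts[1] - verts[0], verts[2] - verts[0]
            area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
            if failed.all():                             # entirely in the failure domain
                p_fail += area * pdf(verts.mean(axis=0)) # one-point centroid cubature
            elif failed.any():                           # crossed by the failure surface
                mixed.append(simplex)                    # candidate for refinement

        print(f"failure probability estimate {p_fail:.3e}, "
              f"{len(mixed)} mixed simplices left to refine")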

    Lost in translation: Toward a formal model of multilevel, multiscale medicine

    For a broad spectrum of low-level cognitive, regulatory and other biological phenomena, isolation from signal crosstalk between them requires more metabolic free energy than permitting correlation. This allows an evolutionary exaptation leading to dynamic global broadcasts of interacting physiological processes at multiple scales. The argument is similar to the well-studied exaptation of noise to trigger stochastic resonance amplification in physiological subsystems. Not only is the living state characterized by cognition at every scale and level of organization, but also by multiple, shifting, tunable, cooperative larger-scale broadcasts that link selected subsets of functional modules to address problems. This multilevel dynamical viewpoint has implications for initiatives in translational medicine that have followed the implosive collapse of pharmaceutical industry 'magic bullet' research. In short, failure to respond to the inherently multilevel, multiscale nature of human pathophysiology will doom translational medicine to a similar implosion.