60 research outputs found

    Correlated uncertainty arithmetic with application to fusion neutronics

    This thesis advances the idea of automatic and rigorous uncertainty propagation for computational science. The aim is to replace the deterministic arithmetic and logical operations composing a function or a computer program with their uncertain equivalents. In this thesis, uncertain computer variables are labelled uncertain numbers, which may be probability distributions, intervals, probability boxes, or possibility distributions. The individual models of uncertainty are surveyed in the context of imprecise probability theory, and their arithmetics are described and developed, with new results presented for each. The presented arithmetic framework allows random variables to be imprecisely characterised or partially defined; it is a common situation that the input random variables are unknown or that only certain characteristics of the inputs are known. It is described how uncertain numbers can be rigorously represented by a finite numerical discretisation. Further, it is shown how arithmetic operations are computed by numerical convolution, accounting both for the error from the inputs' discretisation and for the error from the numerical integration, yielding guaranteed bounds on the computed uncertain numbers. One of the central topics of this thesis is stochastic dependency. Considering complex dependencies amongst uncertain numbers is necessary, as dependence plays a key role in arithmetic operations: an operation between two uncertain numbers is a function not only of the input numbers but also of how they are correlated, and this is often more important than the marginal information. In the presented arithmetic, dependencies between uncertain numbers may also be partially defined or missing entirely. A major proposition of this thesis is a set of methods to propagate dependence information through functions alongside the marginal information. The long-term goal is to solve probabilistic problems with partial knowledge about marginal distributions and dependencies using algorithms which were written deterministically. The developed arithmetic frameworks can be used individually, or may be combined into a larger uncertainty computing framework. We present an application of the developed method to a radiation transport algorithm for nuclear fusion neutronics problems.
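    To give a flavour of what replacing deterministic arithmetic with an uncertain equivalent means, the sketch below uses plain interval arithmetic, the simplest kind of uncertain number named in the abstract. The Interval class and the x - x example are hypothetical illustrations of the general idea, not the thesis's actual framework; the point they make is why tracking dependence between operands matters.

```python
# Minimal sketch of interval arithmetic as one kind of "uncertain number".
# Hypothetical class and names; not the thesis's actual implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Range of x + y over all x in self, y in other.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Dependence-agnostic bounds: assumes nothing about how the
        # two operands co-vary.
        return Interval(self.lo - other.hi, self.hi - other.lo)

x = Interval(1.0, 2.0)

# Naive propagation treats the two occurrences of x as unrelated,
# so x - x evaluates to [-1, 1] instead of the exact answer [0, 0]:
print(x - x)                 # Interval(lo=-1.0, hi=1.0)

# Only by tracking the dependence (x is perfectly correlated with itself)
# can the tight result [0, 0] be recovered, which is the central point
# the abstract makes about propagating dependence information.
print(Interval(0.0, 0.0))
```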

    One Way or Another: Cortical Language Areas Flexibly Adapt Processing Strategies to Perceptual And Contextual Properties of Speech

    Published: 07 April 2021. Cortical circuits rely on the temporal regularities of speech to optimize signal parsing for sound-to-meaning mapping. Bottom-up speech analysis is accelerated by top-down predictions about upcoming words. In everyday communication, however, listeners are regularly presented with challenging input: fluctuations of speech rate or semantic content. In this study, we asked how reducing the temporal regularity of speech affects its processing, namely parsing, phonological analysis, and the ability to generate context-based predictions. To ensure that the spoken sentences were natural and approximated the semantic constraints of spontaneous speech, we built a neural network to select stimuli from large corpora. We analyzed brain activity recorded with magnetoencephalography during sentence listening using evoked responses, speech-to-brain synchronization, and representational similarity analysis. For normal speech, theta-band (6.5–8 Hz) speech-to-brain synchronization was increased and the left fronto-temporal areas generated stronger contextual predictions. The reverse was true for temporally irregular speech: weaker theta synchronization and reduced top-down effects. Interestingly, delta-band (0.5 Hz) speech tracking was greater when contextual/semantic predictions were lower or if speech was temporally jittered. We conclude that speech temporal regularity is relevant for (theta) syllabic tracking and robust semantic predictions, while the joint support of temporal and contextual predictability reduces word- and phrase-level cortical tracking (delta). Funding: the European Union's Horizon 2020 research and innovation programme (Marie Sklodowska-Curie grant agreement No 798971 awarded to A.K.G.); the Spanish Ministry of Science, Innovation and Universities (grant RTI2018-096311-B-I00 to N.M.); the Agencia Estatal de Investigación (AEI); the Fondo Europeo de Desarrollo Regional (FEDER); the Basque Government (through the BERC 2018-2021 programme); the Spanish State Research Agency through the BCBL Severo Ochoa excellence accreditation (SEV-2015-0490), the DeepText project (KK-2020/00088), and the Ixa excellence research group (IT1343-19); the UPV/EHU (postdoctoral grant ESPDOC18/101 to A.B.); and the NVIDIA Corporation (donation of a Titan V GPU to A.B., used for this research).
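    As a rough illustration of what band-limited speech-to-brain synchronization can look like computationally, the sketch below estimates magnitude-squared coherence between a speech envelope and a single MEG channel and averages it over the theta band (6.5–8 Hz) quoted in the abstract. The synthetic signals, sampling rate, and parameters are placeholder assumptions, not the study's actual pipeline.

```python
# Illustrative sketch only: theta-band coherence between a speech envelope
# and one MEG sensor, using scipy's standard coherence estimator.
import numpy as np
from scipy.signal import coherence

fs = 250.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                # one minute of signal

shared = np.sin(2 * np.pi * 7.2 * t)        # shared theta-rate rhythm
speech_envelope = shared + 0.3 * np.random.randn(t.size)   # placeholder envelope
meg_sensor = 0.8 * shared + 1.0 * np.random.randn(t.size)  # placeholder MEG trace

f, cxy = coherence(speech_envelope, meg_sensor, fs=fs, nperseg=1024)
theta = (f >= 6.5) & (f <= 8.0)
print("mean theta-band coherence:", cxy[theta].mean())
```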

    Bounding Failure Probability with the SIVIA Algorithm

    The accuracy of Monte Carlo simulation methods depends on the computational effort invested in reducing the estimator variance. Typically, reducing this variance requires running Monte Carlo with as many samples as one can afford. When the system is complex and the failure event is rare, it can be challenging to establish the correctness of the failure probability estimate. To combat this verification problem, we present an adaptation of the SIVIA algorithm (Set Inversion Via Interval Analysis) that computes rigorous bounds on the failure probability of rare events. With this method, the nonlinearity of the system and the magnitude of the failure event no longer constitute a limitation. The method can therefore be used for verification, when it is of interest to know rigorous bounds on the very small target failure probability of complex systems, for example in benchmark problems. The method is rigorous, i.e. inclusive and outside-in: the more computational effort is invested, the tighter the bounds become. Because a full separation is maintained between the engineering problem and the probability problem, the input uncertainty model can be changed without re-evaluating the physical function, which opens avenues towards computing rigorous imprecise failure probabilities. For example, the reliability could be formulated without making dependency or distributional statements.
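    The following is a stripped-down sketch of a SIVIA-style bisection for bounding a failure probability. The toy limit state g(x, y) = x + y - 1.8 (failure when g >= 0), the independent Uniform(0, 1) inputs, and all function names are assumptions made for illustration, not the paper's implementation; boxes proven to lie in the failure set feed the lower bound, and undetermined boxes widen only the upper bound, giving outside-in bounds that tighten with more bisections.

```python
# Toy SIVIA-style bisection (illustrative only).  With independent
# Uniform(0, 1) inputs, the probability of an axis-aligned box is its area.

def g_range(box):
    # Natural interval extension of g(x, y) = x + y - 1.8 over the box
    # ((xlo, xhi), (ylo, yhi)); failure is the event g >= 0.
    (xlo, xhi), (ylo, yhi) = box
    return xlo + ylo - 1.8, xhi + yhi - 1.8

def area(box):
    (xlo, xhi), (ylo, yhi) = box
    return (xhi - xlo) * (yhi - ylo)

def bisect(box):
    (xlo, xhi), (ylo, yhi) = box
    if xhi - xlo >= yhi - ylo:              # split the wider side
        xm = 0.5 * (xlo + xhi)
        return ((xlo, xm), (ylo, yhi)), ((xm, xhi), (ylo, yhi))
    ym = 0.5 * (ylo + yhi)
    return ((xlo, xhi), (ylo, ym)), ((xlo, xhi), (ym, yhi))

def sivia_bounds(eps=1e-3):
    lower = upper = 0.0
    stack = [((0.0, 1.0), (0.0, 1.0))]      # start from the whole input box
    while stack:
        box = stack.pop()
        glo, ghi = g_range(box)
        if glo >= 0.0:                      # box certainly inside the failure set
            lower += area(box)
            upper += area(box)
        elif ghi < 0.0:                     # box certainly safe: discard
            continue
        elif area(box) < eps:               # undetermined but small: upper bound only
            upper += area(box)
        else:
            stack.extend(bisect(box))
    return lower, upper

print(sivia_bounds())   # brackets the exact failure probability, 0.02
```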

    Verified bounds on the imprecise failure probability with the SIVIA algorithm

    In this work, we explore the use of SIVIA (Set Inversion Via Interval Analysis) on failure probability problems formulated with imprecision. Because of the imprecision, and due to sub-additivity, the integration over the rigorous sub-paving can no longer be done using only the antiderivative of the joint or copula density, a.k.a. the h-volume. Under random-set independence, or with precise copulas, and on small problems (with a small number of variables), the imprecise failure probability can be obtained by counting the intersections of the sub-pavings with all focal elements in the product space. The joint focal elements that are fully contained in the failure sub-paving correspond to the belief, i.e. the lower bound on the failure probability. On larger problems, enumerating the full product space is no longer feasible; we replace it with random slicing. The approximation error introduced by the random slicing is controlled at a given level of confidence and typically decreases as more slices are evaluated.
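    The counting argument under random-set independence can be sketched as below. The marginal discretisations, the toy limit state g(x, y) = x + y - 1.5, and all names are hypothetical illustrations, not the paper's code: belief sums the masses of joint focal elements certainly contained in the failure set, and plausibility sums those that possibly intersect it.

```python
# Toy illustration of belief/plausibility bounds on a failure probability
# under random-set independence (not the paper's implementation).
from itertools import product

# Hypothetical marginal discretisations: four equal-mass focal elements on [0, 1].
x_focals = [((i / 4, (i + 1) / 4), 0.25) for i in range(4)]
y_focals = [((i / 4, (i + 1) / 4), 0.25) for i in range(4)]

def g_range(xbox, ybox):
    # Interval extension of g(x, y) = x + y - 1.5; failure is g >= 0.
    return xbox[0] + ybox[0] - 1.5, xbox[1] + ybox[1] - 1.5

belief = 0.0
plausibility = 0.0
# Joint focal elements are Cartesian products of the marginal ones,
# carrying the product of the marginal masses.
for (xbox, mx), (ybox, my) in product(x_focals, y_focals):
    glo, ghi = g_range(xbox, ybox)
    if glo >= 0.0:              # joint focal element certainly fails
        belief += mx * my
    if ghi >= 0.0:              # joint focal element possibly fails
        plausibility += mx * my

# Belief <= true failure probability (0.125 here) <= plausibility.
print(belief, plausibility)     # 0.0625 0.375
```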

    Why the 1-Wasserstein distance is the area between the two marginal CDFs

    We elucidate why the 1-Wasserstein distance W1 coincides with the area between the two marginal cumulative distribution functions (CDFs). We first describe the Wasserstein distance in terms of copulas, and then show that W1 with the Euclidean distance is attained by the M copula. Two random variables whose dependence is given by the M copula exhibit perfect (positive) dependence. If we express the random variables in terms of their CDFs, it becomes intuitive that the distance between two such random variables coincides with the area between the two CDFs.
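    A quick numerical check of the two claims is sketched below; the sample distributions and helper code are arbitrary illustrative choices, not material from the paper. It compares (i) the area between two empirical CDFs, (ii) the mean absolute difference of the sorted samples, which is the comonotone (M-copula) coupling, and (iii) scipy's W1 estimate.

```python
# Numerical illustration: W1 equals the area between the empirical CDFs and
# is attained by pairing the sorted samples (the M copula).
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)
y = rng.normal(0.5, 1.5, 5000)

# (i) area between the two empirical CDFs, integrated on a fine grid
grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), 20001)
Fx = np.searchsorted(np.sort(x), grid, side="right") / x.size
Fy = np.searchsorted(np.sort(y), grid, side="right") / y.size
area = trapezoid(np.abs(Fx - Fy), grid)

# (ii) expected |X - Y| under the M copula: pair the sorted samples
comonotone = np.mean(np.abs(np.sort(x) - np.sort(y)))

# (iii) scipy's 1-Wasserstein distance between the two samples
print(area, comonotone, wasserstein_distance(x, y))   # all three agree closely
```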
