89 research outputs found

    Quantizing rare random maps: application to flooding visualization

    Full text link
    Visualization is an essential operation when assessing the risk of rare events such as coastal or river flooding. The goal is to display a few prototype events that best represent the probability law of the observed phenomenon, a task known as quantization. It becomes a challenge when data is expensive to generate and critical events are scarce, as with extreme natural hazards. In the case of flooding, each event relies on an expensive-to-evaluate hydraulic simulator which takes offshore meteo-oceanic conditions and dyke breach parameters as inputs and computes the water level map. In this article, Lloyd's algorithm, which classically serves to quantize data, is adapted to the context of rare and costly-to-observe events. Low probability is handled through importance sampling, while Functional Principal Component Analysis combined with a Gaussian process metamodel deals with the costly hydraulic simulations. The calculated prototype maps represent the probability distribution of the flooding events in a minimal expected distance sense, and each is associated with a probability mass. The method is first validated on a 2D analytical model and then applied to a real coastal flooding scenario. The two sources of error, the metamodel and the importance sampling, are evaluated to quantify the precision of the method.
    Comment: 40 pages, 11 figures, submitted to Journal of Computational and Graphical Statistics
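    The adaptation described above can be sketched in a few lines: Lloyd's iterations run on samples drawn from an instrumental distribution, with importance weights f(x)/g(x) correcting both the centroid updates and the cell masses toward the target law. A minimal sketch under those assumptions (the function name and weighting details are illustrative, not the paper's code):

```python
import numpy as np

def is_lloyd(samples, w, n_proto, n_iter=50, seed=0):
    """Lloyd's algorithm with importance-sampling weights.

    samples : (n, d) points drawn from a biased (instrumental) density g
    w       : (n,) importance weights f(x)/g(x) for the target density f
    Returns prototype points and their estimated probability masses.
    """
    rng = np.random.default_rng(seed)
    protos = samples[rng.choice(len(samples), n_proto, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest prototype (Voronoi cell).
        d2 = ((samples[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        cell = d2.argmin(axis=1)
        # Weighted centroid update: the importance weights correct for
        # the biased sampling, so centroids target the true distribution.
        for k in range(n_proto):
            m = cell == k
            if w[m].sum() > 0:
                protos[k] = (w[m][:, None] * samples[m]).sum(0) / w[m].sum()
    masses = np.array([w[cell == k].sum() for k in range(n_proto)]) / w.sum()
    return protos, masses
```

    In the paper's setting the samples themselves would come from the FPCA/Gaussian-process metamodel rather than the true hydraulic simulator; here they are plain arrays.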

    FunQuant: An R package to perform quantization in the context of rare events and time-consuming simulations

    Full text link
    Quantization summarizes continuous distributions by computing a discrete approximation. Among the widely adopted methods for data quantization is Lloyd's algorithm, which partitions the space into Voronoï cells, which can be seen as clusters, and constructs a discrete distribution based on their centroids and probability masses. Lloyd's algorithm estimates the optimal centroids in a minimal expected distance sense, but this approach poses significant challenges when data evaluation is costly and relates to rare events: the single cluster associated with the absence of an event then captures most of the probability mass. In this context, a metamodel is required, and adapted sampling methods are necessary to increase the precision of the computations on the rare clusters.
    Comment: 7 pages, 4 figures. Submitted to Journal of Open Source Software
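    As a baseline illustration of the problem the package addresses, standard Lloyd quantization (k-means) with empirical cell masses collapses in a rare-event setting: nearly all the mass lands in the "no event" cell and the event prototypes are poorly resolved. A toy sketch (the data and cluster count are invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy rare-event setting: 99% of samples are the "no event" outcome at the
# origin, 1% are events spread over the plane.
rng = np.random.default_rng(1)
n = 10_000
is_event = rng.random(n) < 0.01
x = np.where(is_event[:, None],
             rng.normal(3.0, 1.0, (n, 2)),
             np.zeros((n, 2)))

# Standard Lloyd quantization (k-means) with empirical cell masses.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(x)
masses = np.bincount(km.labels_, minlength=5) / n
print(masses)  # one cell holds ~0.99 of the mass; event cells are starved of data
```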

    Constraints from deuterium on the formation of icy bodies in the Jovian system and beyond

    Full text link
    We consider the role of deuterium as a potential marker of location and ambient conditions during the formation of small bodies in our Solar system. We concentrate in particular on the formation of the regular icy satellites of Jupiter and the other giant planets, but include a discussion of the implications for the Trojan asteroids and the irregular satellites. We examine in detail the formation of regular planetary satellites within the paradigm of a circum-Jovian subnebula. Particular attention is paid to the two extreme potential subnebulae - "hot" and "cold". In particular, we show that, for the case of the "hot" subnebula model, the D:H ratio in water ice measured from the regular satellites would be expected to be near-Solar. In contrast, satellites which formed in a "cold" subnebula would be expected to display a D:H ratio that is distinctly over-Solar. We then compare these results with the enrichment regimes which could be expected for other families of icy small bodies in the outer Solar system - the Trojan asteroids and the irregular satellites. In doing so, we demonstrate how measurements by Laplace, the James Webb Space Telescope, HERSCHEL and ALMA will play an important role in determining the true formation locations and mechanisms of these objects.
    Comment: Accepted and shortly to appear in Planetary and Space Science; 11 pages with 5 figures

    Suppression of the initial transient in Monte Carlo criticality simulations

    Get PDF
    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) of a fissile system through iterations simulating neutron propagation, which form a Markov chain. Arbitrary initialization of the neutron population can strongly bias the estimation of the system k-effective, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on the characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to assess the stationarity of the cycle k-effective sequence. Any detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different variants of this methodology are detailed and compared, first on a design of numerical experiments representative of criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best-performing variants observed in these tests are selected, making it possible to improve industrial Monte Carlo criticality calculations.
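    One way to build such a test: under stationarity, the scaled cumulative deviations of the cycle k-effective sequence behave approximately like a Brownian bridge, whose supremum follows the Kolmogorov distribution. A hedged sketch of that construction (the exact statistic and threshold used in the thesis are assumptions here):

```python
import numpy as np

def bridge_statistic(k_seq):
    """Standardized cumulative-deviation (Brownian bridge) statistic.

    Under stationarity of the cycle k-effective sequence, the partial sums
    of deviations from the overall mean, scaled by sigma*sqrt(n), behave
    approximately like a Brownian bridge on [0, 1].
    """
    k = np.asarray(k_seq, dtype=float)
    n = len(k)
    s = np.cumsum(k - k.mean())          # bridge: s[-1] == 0 by construction
    sigma = k.std(ddof=1)
    return np.abs(s).max() / (sigma * np.sqrt(n))

def is_stationary(k_seq, crit=1.36):
    # 1.36 is the usual 5% Kolmogorov-Smirnov critical value; applying it
    # to the bridge supremum here is an assumption of this sketch.
    return bridge_statistic(k_seq) < crit
```

    In practice cycle k-effective values are autocorrelated, so the variance estimate would need a correction (e.g., batch means); the tests developed in the thesis are more refined than this independence-based sketch.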

    Inversion Algorithm for Civil Flood Defense Optimization: Application to Two-Dimensional Numerical Model of the Garonne River in France.

    No full text
    The objective of this study is to investigate the "inversion approach" for flood defense optimization in an inundated area. This methodology, new to this engineering field, consists in defining a "safety criterion" (for instance, "the water level at a given location must remain below a given value") and then identifying the set of controlled parameters (i.e., flood defense geometry, location, etc.) that ensure the safety criterion is met for all possible combinations of the uncontrolled parameters (i.e., the flow hydrograph parameters) representing the natural phenomenon. To estimate this safety set, a metamodeling approach is used, which significantly reduces the number of model evaluations required. The algorithm relies on a kriging surrogate built from a few model evaluations and sequentially enriched with new numerical model evaluations as long as the remaining uncertainty on the safety set is too high. Also known as "Stepwise Uncertainty Reduction," this algorithm is embedded in the "Funz" engine (https://github.com/Funz), tasked with bridging the numerical model and any design-of-experiments algorithm. We applied this algorithm to a real two-dimensional numerical model of the Garonne river (France), constructed using the open-source TELEMAC-2D model. We focused our attention mainly on the maximum water depth in a given area (the "safety criterion") when considering the influence of a simplified flood defense during a flooding event. We consider the two safety control parameters describing the slab and dyke elevations of the flood defense system, designed against the full operating range of the river in terms of possible watershed flooding. For this application case, fewer than 200 simulations are needed to properly evaluate the restricted zone of the design parameters (the "safety zone") where the safety criterion is always met. This provides highly valuable data for fully risk-informed management of the area requiring protection.
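    A compact sketch of the Stepwise Uncertainty Reduction loop described above, with a kriging (Gaussian process) surrogate and a common misclassification-style enrichment criterion. The simulator, domain, and criterion here are illustrative stand-ins, not the TELEMAC-2D model or the paper's exact SUR criterion:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical stand-in for the expensive TELEMAC-2D run: max water depth
# as a function of (slab elevation, dyke elevation). Purely illustrative.
def simulator(x):
    return 2.0 - 0.8 * x[:, 0] - 1.1 * x[:, 1] + 0.3 * np.sin(4 * x[:, 0])

threshold = 0.5                      # safety criterion: depth < threshold
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (8, 2))        # small initial design
y = simulator(X)
cand = rng.uniform(0, 1, (2000, 2))  # candidate points covering the domain

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(30):                  # sequential enrichment loop
    gp.fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    # SUR-style criterion (an assumption here): evaluate where the kriging
    # model is least certain about which side of the threshold we are on.
    score = np.abs(mu - threshold) / np.maximum(sd, 1e-12)
    x_new = cand[score.argmin()][None, :]
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new))
gp.fit(X, y)

safe = gp.predict(cand) < threshold  # estimated safety zone on the candidates
print(f"estimated safe fraction of the design space: {safe.mean():.2f}")
```

    The enrichment stops, as in the abstract, once the remaining uncertainty on the safety set is low enough; the fixed 30-iteration budget above is a simplification.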

    Automated Suppression of the Initial Transient in Monte Carlo Calculations based on Stationarity Detection using the Brownian Bridge Theory

    No full text
    http://typhoon.jaea.go.jp/icnc2003/Proceeding/paper/5.25_063.pdf
    The accuracy of a criticality Monte Carlo (MC) calculation requires the convergence of the k-effective series. Once convergence is reached, the estimation of the k-effective eigenvalue must exclude the initial transient of the k-effective series. The present paper deals with a post-processing algorithm to suppress the initial transient of a criticality MC calculation, using Brownian bridge theory.
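    The post-processing step amounts to discarding leading cycles until the remaining tail passes a stationarity test. A minimal sketch of that truncation loop, using the same bridge statistic as above (the step size and threshold are assumptions of this sketch, not the paper's settings):

```python
import numpy as np

def truncate_transient(k_seq, crit=1.36, step=10):
    """Drop leading cycles until the bridge statistic accepts stationarity.

    Returns the number of discarded cycles and the mean k-effective of the
    retained tail; (len(k_seq), nan) if no stationary tail is found.
    """
    k = np.asarray(k_seq, dtype=float)
    for start in range(0, len(k) - 2 * step, step):
        tail = k[start:]
        s = np.cumsum(tail - tail.mean())
        stat = np.abs(s).max() / (tail.std(ddof=1) * np.sqrt(len(tail)))
        if stat < crit:                # tail looks stationary: stop here
            return start, tail.mean()
    return len(k), np.nan
```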