
    A Bootstrapped Modularised method of Global Sensitivity Analysis applied to Probabilistic Seismic Hazard Assessment

    Probabilistic Seismic Hazard Assessment (PSHA) evaluates the probability of exceeding a given earthquake intensity threshold, such as the Peak Ground Acceleration, at a target site over a given exposure time. The occurrence of seismic events is modelled by stochastic processes, and the propagation of the earthquake wave through the soil is typically evaluated with empirical relationships called Ground Motion Prediction Equations. The large uncertainty affecting PSHA is quantified by defining alternative model settings and/or model parametrizations. In this work, we propose a novel Bootstrapped Modularised Global Sensitivity Analysis (BMGSA) method for identifying the model parameters most important for the uncertainty in PSHA. The method consists in generating alternative artificial datasets by bootstrapping an available input-output dataset and aggregating the individual rankings obtained by applying the modularized method to each of them. The proposed method is tested on a realistic PSHA case study in Italy, and the results are compared with a standard variance-based Global Sensitivity Analysis (GSA) method from the literature. The novelty and strength of the proposed BMGSA method lie in the fact that its application requires only input-output data and not repeated calculations with a PSHA code.
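
    The abstract outlines the BMGSA workflow only at a high level. Below is a minimal sketch of the bootstrap-and-aggregate idea, assuming a generic per-dataset ranking function; the placeholder ranking based on rank correlation and all function names are illustrative stand-ins, not the paper's modularized estimator.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_inputs(X, y):
    """Placeholder importance ranking: |Spearman correlation| of each input
    with the output. The paper's modularized GSA estimator would go here."""
    scores = np.array([abs(spearmanr(X[:, j], y)[0]) for j in range(X.shape[1])])
    return np.argsort(-scores)  # input indices, most to least important

def bmgsa_ranking(X, y, n_boot=200, seed=0):
    """Bootstrap the input-output dataset, rank inputs on each resample,
    and aggregate the rankings by mean rank position."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    rank_positions = np.zeros((n_boot, d))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample rows with replacement
        order = rank_inputs(X[idx], y[idx])       # ranking for this bootstrap dataset
        rank_positions[b, order] = np.arange(d)   # position of each input in the ranking
    mean_position = rank_positions.mean(axis=0)
    return np.argsort(mean_position), mean_position

# Toy usage with a synthetic input-output dataset
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)
final_order, mean_pos = bmgsa_ranking(X, y)
print("aggregated ranking (most important first):", final_order)
```

    In this sketch the aggregation is a simple mean rank position across bootstrap replicates; other aggregation rules could be substituted without changing the overall structure.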

    Variance-based reliability sensitivity with dependent inputs using failure samples

    Reliability sensitivity analysis is concerned with measuring the influence of a system's uncertain input parameters on its probability of failure. Statistically dependent inputs present a challenge in both computing and interpreting these sensitivity indices; such dependencies require discerning between variable interactions produced by the probabilistic model describing the system inputs and those produced by the computational model describing the system itself. To accomplish such a separation of effects in the context of reliability sensitivity analysis, we extend an idea originally proposed by Mara and Tarantola (2012) for model outputs unrelated to rare events. We compute the independent (influence via the computational model) and full (influence via both the computational and the probabilistic model) contributions of all inputs to the variance of the indicator function of the rare event. We compute this full set of variance-based sensitivity indices of the rare-event indicator using a single set of failure samples. This is possible by considering d different hierarchically structured isoprobabilistic transformations of this set of failure samples from the original d-dimensional space of dependent inputs to standard-normal space. The approach facilitates computing the full set of variance-based reliability sensitivity indices with a single set of failure samples obtained as the byproduct of a single run of a sample-based rare event estimation method; that is, no additional evaluations of the computational model are required. We demonstrate the approach on a test function and two engineering problems.
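
    As a rough illustration of how a variance-based index of a rare-event indicator can be recovered from failure samples alone, the sketch below handles only the special case of independent inputs: by Bayes' rule, P(F | X_i = x) = p_f f_{X_i|F}(x) / f_{X_i}(x), so a kernel density estimate of the failure-conditional marginal yields the first-order index. The hierarchically structured isoprobabilistic transformations the paper uses for dependent inputs are not reproduced here, and the toy limit state is invented for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def first_order_indicator_sobol(failure_samples, pf, marginal_pdf, grid):
    """First-order Sobol index of the rare-event indicator for one input,
    using only failure samples (Bayes: P(F | x_i) = pf * f_{X_i|F}(x_i) / f_{X_i}(x_i))."""
    kde = gaussian_kde(failure_samples)               # f_{X_i | F} from failure samples
    cond_prob = pf * kde(grid) / marginal_pdf(grid)   # P(F | X_i = x) on a grid
    weights = marginal_pdf(grid)
    weights = weights / weights.sum()                 # crude quadrature over the grid
    mean = np.sum(weights * cond_prob)                # should be close to pf
    var_cond = np.sum(weights * (cond_prob - mean) ** 2)
    return var_cond / (pf * (1.0 - pf))               # Var(E[1_F | X_i]) / Var(1_F)

# Toy usage: failure = {x1 + x2 > 3} with independent standard-normal inputs
rng = np.random.default_rng(0)
x = rng.normal(size=(200000, 2))
fail = x[:, 0] + x[:, 1] > 3.0
pf = fail.mean()
grid = np.linspace(-4, 4, 400)
for i in range(2):
    s_i = first_order_indicator_sobol(x[fail, i], pf, norm.pdf, grid)
    print(f"S_{i+1} ~ {s_i:.3f}")
```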

    Probability-space Surrogate Modeling for Sensitivity Analysis and Optimization

    This paper presents probability-space surrogate modeling approaches for global sensitivity analysis (GSA) and optimization under uncertainty. A probability model is first learned from the available data to capture the nonlinear probabilistic relationships between the quantity of interest and the input variables, as well as among the input variables themselves. Based on the learned probability model, approaches are then developed for design optimization under uncertainty and for fast computation of the first-order and total-effect sensitivity indices. This framework is applicable not only to GSA with correlated random variables and with sets of input variables, but also to the design of coupled multidisciplinary systems under uncertainty with multiple objectives. The implementation of the proposed framework is investigated through two probability models, namely the Gaussian copula model and the Gaussian mixture model. One numerical example and one aircraft wing design problem demonstrate the effectiveness of the proposed method for GSA and multidisciplinary design under uncertainty.
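
    To make the probability-space idea concrete, the sketch below fits a two-dimensional Gaussian mixture to (X_i, Y) data and uses its analytic conditional mean to estimate a first-order sensitivity index. This is only a minimal illustration of the Gaussian mixture variant for a scalar input and output; the paper's Gaussian copula model, total-effect indices, and multidisciplinary optimization are not covered, and the test function is an assumed toy example.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def first_order_index_gmm(xi, y, n_components=4, seed=0):
    """Fit a 2-D Gaussian mixture to (X_i, Y), form the analytic conditional
    mean E[Y | X_i], and estimate S_i = Var(E[Y | X_i]) / Var(Y)."""
    data = np.column_stack([xi, y])
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(data)
    mu, cov, pi = gmm.means_, gmm.covariances_, gmm.weights_

    def cond_mean(x):
        # responsibilities of each mixture component given X_i = x
        comp_pdf = np.array([pi[k] * norm.pdf(x, mu[k, 0], np.sqrt(cov[k, 0, 0]))
                             for k in range(len(pi))])
        w = comp_pdf / comp_pdf.sum(axis=0)
        # per-component conditional means of Y given X_i = x
        m = np.array([mu[k, 1] + cov[k, 0, 1] / cov[k, 0, 0] * (x - mu[k, 0])
                      for k in range(len(pi))])
        return (w * m).sum(axis=0)

    e_y_given_xi = cond_mean(xi)
    return e_y_given_xi.var() / y.var()

# Toy usage on y = sin(x1) + 7 sin(x2)^2 with uniform inputs on [-pi, pi]
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(5000, 2))
y = np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
for i in range(2):
    print(f"S_{i+1} ~ {first_order_index_gmm(x[:, i], y):.3f}")
```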

    Uncertainty Quantification and Sensitivity Analysis of Multiphysics Environments for Application in Pressurized Water Reactor Design

    The most common design among U.S. nuclear power plants is the pressurized water reactor (PWR). The three primary design disciplines of these plants are system analysis (which includes thermal hydraulics), neutronics, and fuel performance. The nuclear industry has developed a variety of codes over the course of forty years, each with an emphasis within a specific discipline. Perhaps the greatest difficulty in mathematically modeling a nuclear reactor is choosing which specific phenomena need to be modeled, and to what detail. A multiphysics computational environment provides a means of advancing simulations of nuclear plants: put simply, users are able to combine physical models that have commonly been treated separately in the past. The focus of this work is a specific multiphysics environment currently under development at Idaho National Laboratory (INL), known as the LOCA Toolkit for US light water reactors (LOTUS). The ability of LOTUS to use uncertainty quantification (UQ) and sensitivity analysis (SA) tools within a multiphysics environment allows for a number of unique analyses which, to the best of our knowledge, have yet to be performed. These include the first known integration of the neutronics and thermal hydraulics code VERA-CS, currently under development by CASL, with the well-established fuel performance code FRAPCON by PNNL. The integration was used to model a fuel depletion case. The outputs of interest for this integration were the minimum departure from nucleate boiling ratio (MDNBR) (a thermal hydraulic parameter indicating how close a heat flux is to causing a dangerous form of boiling in which an insulating layer of coolant vapour is formed), the maximum fuel centerline temperature (MFCT) of the uranium rod, and the gap conductance at peak power (GCPP). GCPP refers to the thermal conductance of the gas-filled gap between fuel and cladding at the axial location with the highest local power generation. UQ and SA were performed on MDNBR, MFCT, and GCPP at a variety of times throughout the fuel depletion. Results showed MDNBR to behave linearly and consistently throughout the depletion, with the most impactful input uncertainties being coolant outlet pressure and inlet temperature as well as core power. MFCT also behaves linearly, but with a shift in SA measures: initially MFCT is sensitive to fuel thermal conductivity and gap dimensions, but later in the fuel cycle nearly all uncertainty stems from fuel thermal conductivity, with minor contributions from core power and initial fuel density. GCPP uncertainty exhibits nonlinear, time-dependent behaviour which requires higher-order SA measures to analyze properly. GCPP begins with a dependence on gap dimensions but, in later states, shifts to a dependence on the biases of a variety of specific calculations such as fuel swelling and cladding creep and oxidation. LOTUS was also used to perform the first higher-order SA of an integration of VERA-CS with the BISON fuel performance code currently under development at INL. The same problem and outputs were studied as in the VERA-CS and FRAPCON integration. Results for MDNBR and MFCT were relatively consistent. GCPP results contained notable differences, specifically a large dependence on fuel and clad surface roughness in later states; however, this difference is due to the surface roughness not being perturbed in the first integration. SA of later states also showed an increased sensitivity to fission gas release coefficients.
    Lastly, a Loss of Coolant Accident (LOCA) was investigated with an integration of FRAPCON with the INL neutronics code PHISICS and the system analysis code RELAP5-3D. The outputs of interest were the ratios of the peak cladding temperature (PCT, the highest temperature encountered by the cladding during the LOCA) and the equivalent cladding reacted (ECR, the percentage of cladding oxidized) to their cladding hydrogen content-based limits. This work contains the first known UQ of these ratios within the aforementioned integration. Results showed the PCT ratio to be relatively well behaved. The ECR ratio behaves as a threshold variable, which is to say it abruptly shifts to radically higher values under specific conditions. This threshold behaviour establishes the importance of performing UQ so as to see the full spectrum of possible values for an output of interest. The SA capabilities of LOTUS provide a path forward for developers to increase code fidelity for specific outputs. Performing UQ within a multiphysics environment may provide improved estimates of safety metrics in nuclear reactors; these improved estimates may allow plants to operate at higher power, thereby increasing profits. Lastly, LOTUS will be of particular use in the development of newly proposed nuclear fuel designs.
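
    One common way to compute the kind of first- and higher-order SA measures mentioned above is variance-based Sobol analysis. The sketch below shows a generic pick-and-freeze (Saltelli/Jansen) estimator of first- and total-order indices around a mock single-output function standing in for a coupled multiphysics run; the mock model, its inputs, and the estimators are generic illustrations and do not reflect the actual LOTUS/VERA-CS/FRAPCON analysis.

```python
import numpy as np

def mock_coupled_model(x):
    """Hypothetical stand-in for a coupled multiphysics run returning a single
    output (e.g., an MDNBR-like quantity); not the actual code coupling."""
    power, t_inlet, p_outlet = x[:, 0], x[:, 1], x[:, 2]   # inputs scaled to [0, 1]
    return 2.5 - 0.8 * power - 0.4 * t_inlet + 0.1 * p_outlet + 0.2 * power * t_inlet

def sobol_indices(model, d, n=2**14, seed=0):
    """First-order and total-order Sobol indices via the pick-and-freeze
    (Saltelli/Jansen) estimators on inputs sampled uniformly in [0, 1]."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S1, ST = np.zeros(d), np.zeros(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # swap only column i
        yABi = model(ABi)
        S1[i] = np.mean(yB * (yABi - yA)) / var   # Saltelli first-order estimator
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var   # Jansen total-order estimator
    return S1, ST

S1, ST = sobol_indices(mock_coupled_model, d=3)
print("first-order:", np.round(S1, 3), "total-order:", np.round(ST, 3))
```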

    Probabilistic Prognosis of Non-Planar Fatigue Crack Growth

    Quantifying the uncertainty in model parameters for the purpose of damage prognosis can be accomplished utilizing Bayesian inference and damage diagnosis data from sources such as non-destructive evaluation or structural health monitoring. The number of samples required to solve the Bayesian inverse problem through common sampling techniques (e.g., Markov chain Monte Carlo) renders high-fidelity finite element-based damage growth models unusable due to prohibitive computation times. However, these types of models are often the only option when attempting to model complex damage growth in real-world structures. Here, a recently developed high-fidelity crack growth model is used which, when compared to finite element-based modeling, has demonstrated reductions in computation times of three orders of magnitude through the use of surrogate models and machine learning. The model is flexible in that only the expensive computation of the crack driving forces is replaced by the surrogate models, leaving the remaining parameters accessible for uncertainty quantification. A probabilistic prognosis framework incorporating this model is developed and demonstrated for non-planar crack growth in a modified, edge-notched, aluminum tensile specimen. Predictions of remaining useful life are made over time for five updates of the damage diagnosis data, and prognostic metrics are utilized to evaluate the performance of the prognostic framework. Challenges specific to the probabilistic prognosis of non-planar fatigue crack growth are highlighted and discussed in the context of the experimental results.
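
    As a simplified illustration of the Bayesian updating step, the sketch below runs a Metropolis-Hastings sampler over two parameters of a planar Paris-law growth model fitted to synthetic crack-length observations; the posterior samples could then be propagated forward to estimate remaining useful life. The Paris-law surrogate, priors, noise level, and parameter values are all assumptions for the example and are far simpler than the paper's non-planar high-fidelity model.

```python
import numpy as np

def crack_length(log_c, m, cycles, a0=1e-3, delta_sigma=100.0):
    """Simplified planar Paris-law growth: a(N) from log10(C) and exponent m."""
    a = np.full_like(cycles, a0, dtype=float)
    for k in range(1, len(cycles)):
        dk = delta_sigma * np.sqrt(np.pi * a[k - 1])          # crude Delta-K estimate
        da = 10.0 ** log_c * dk ** m * (cycles[k] - cycles[k - 1])
        a[k] = min(a[k - 1] + da, 1.0)                        # cap to avoid blow-up
    return a

def log_posterior(theta, cycles, obs, sigma_obs=1e-4):
    log_c, m = theta
    if not (-12.0 < log_c < -8.0 and 2.0 < m < 4.0):          # uniform priors
        return -np.inf
    pred = crack_length(log_c, m, cycles)
    return -0.5 * np.sum(((obs - pred) / sigma_obs) ** 2)     # Gaussian likelihood

def metropolis(cycles, obs, n_iter=5000, step=(0.05, 0.02), seed=0):
    """Random-walk Metropolis sampler for (log10 C, m)."""
    rng = np.random.default_rng(seed)
    theta = np.array([-10.0, 3.0])
    lp = log_posterior(theta, cycles, obs)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_posterior(prop, cycles, obs)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy usage: synthetic diagnosis data, then posterior samples of (log10 C, m)
cycles = np.linspace(0, 2e5, 21)
obs = crack_length(-10.2, 3.1, cycles) + np.random.default_rng(1).normal(0, 1e-4, 21)
chain = metropolis(cycles, obs)
print("posterior mean (second half of chain):", chain[len(chain) // 2:].mean(axis=0))
```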

    Uncertainties in shoreline projections to 2100 at Truc Vert Beach (France): Role of sea‐level rise and equilibrium model assumptions

    Sandy shoreline morphodynamics respond to a myriad of processes interacting at different spatial and temporal scales, making shoreline predictions challenging. Shoreline modeling inherits uncertainties from the boundary conditions of the primary drivers (e.g., sea-level rise and wave forcing) as well as uncertainties related to model assumptions and/or misspecifications of the physics. This study presents an analysis of the uncertainties associated with future shoreline evolution at the cross-shore-transport-dominated sandy beach of Truc Vert (France) over the 21st century. We explicitly resolve wave-driven shoreline change using two different equilibrium modeling approaches to provide new insight into the contributions of sea-level rise and free model parameter uncertainties to future shoreline change under climate change. Based on a Global Sensitivity Analysis, shoreline response during the first half of the century is found to be mainly sensitive to the equilibrium model parameters, with the influence of sea-level rise emerging in the second half of the century (∼2050 or later) under several simulated scenarios. The results reveal that the seasonal and interannual variability of the predicted shoreline position is sensitive to the choice of the wave-driven equilibrium-based model. Finally, we discuss the importance of the chronology of wave events in future shoreline change, calling for more continuous wave projection time series to further address uncertainties in future wave conditions. Our contribution demonstrates that unmitigated climate change can result in shoreline retreat of several tens of meters by 2100, even for sectors that have been stable or slightly accreting over the last century.
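
    To illustrate how a Global Sensitivity Analysis can separate the roles of sea-level rise and free equilibrium-model parameters over time, the sketch below applies a crude binned first-order index estimator to a hypothetical shoreline-change model combining a Bruun-type retreat term with an exponentially relaxing wave-driven term. The model, parameter ranges, and estimator are invented for the example and are not the equilibrium models or the GSA method used in the study.

```python
import numpy as np

def toy_shoreline_change(slr_2100, phi, dy_wave, year):
    """Hypothetical shoreline-change model (not the paper's): Bruun-type retreat
    proportional to sea-level rise plus an equilibrium wave-driven term whose
    response time is set by the free parameter phi (years)."""
    slr = slr_2100 * (year - 2020) / (2100 - 2020)            # linear SLR ramp (m)
    bruun_retreat = -slr / 0.02                                # active-profile slope ~0.02
    wave_term = dy_wave * (1.0 - np.exp(-(year - 2020) / phi))
    return bruun_retreat + wave_term

def binned_first_order(xi, y, n_bins=20):
    """Crude first-order index: variance of bin-averaged E[Y | X_i] over Var(Y)."""
    bins = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(xi, bins) - 1, 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return cond_means.var() / y.var()

rng = np.random.default_rng(0)
n = 50000
slr = rng.uniform(0.3, 1.0, n)        # sea-level rise by 2100 (m)
phi = rng.uniform(5.0, 50.0, n)       # equilibrium-model free parameter (yr)
dy = rng.uniform(-30.0, 10.0, n)      # long-term wave-driven equilibrium shift (m)
for year in (2050, 2100):
    y = toy_shoreline_change(slr, phi, dy, year)
    s = [binned_first_order(x, y) for x in (slr, phi, dy)]
    print(year, "S_slr=%.2f S_phi=%.2f S_dy=%.2f" % tuple(s))
```

    In this toy setting the index attached to sea-level rise grows between 2050 and 2100, mirroring the qualitative pattern the abstract describes for the second half of the century.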