
    A review of techniques for parameter sensitivity analysis of environmental models

    Mathematical models are used to approximate highly complex engineering, physical, environmental, social, and economic phenomena. The model parameters exerting the most influence on model results are identified through a 'sensitivity analysis'. A comprehensive review of more than a dozen sensitivity analysis methods is presented, intended for readers not intimately familiar with statistics or with the techniques used for sensitivity analysis of computer models. The most fundamental sensitivity technique uses partial differentiation, whereas the simplest approach requires varying parameter values one at a time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly used to build response surfaces that approximate complex models.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/42691/1/10661_2004_Article_BF00547132.pd
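The one-at-a-time approach described above can be sketched in a few lines. The model and parameter values here are hypothetical stand-ins for illustration, not drawn from the review:

```python
import math

def model(params):
    # Hypothetical two-parameter model standing in for an environmental simulator.
    k, c = params["k"], params["c"]
    return c * math.exp(-k)

def oat_sensitivity(model, base, delta=0.01):
    """Vary each parameter one at a time by a small relative delta and
    report the normalized (elasticity-like) sensitivity of the output."""
    y0 = model(base)
    sens = {}
    for name, value in base.items():
        perturbed = dict(base)
        perturbed[name] = value * (1.0 + delta)
        # Normalized sensitivity: (dy / y0) / (dp / p)
        sens[name] = (model(perturbed) - y0) / (y0 * delta)
    return sens

base = {"k": 0.5, "c": 2.0}
print(oat_sensitivity(model, base))
```

For this toy model the output scales linearly with `c` (sensitivity ≈ 1) and decays exponentially in `k` (sensitivity ≈ −k), which the finite-difference estimates recover to within the perturbation size.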

    Best-Estimate Model Calibration and Prediction through Experimental Data Assimilation—II: Application to a Blow-down Benchmark Experiment

    This work presents a paradigm application of a new methodology for simultaneously calibrating (adjusting) model parameters and responses through assimilation of experimental data, applied here to the benchmark transient thermal-hydraulic experiment IC1 performed at Imperial College London. Following a description of the experimental setup, the corresponding mathematical model is developed and solved numerically. The sensitivities of typically important responses (e.g., temperatures, pressures) to the model parameters are computed by applying both the forward and the adjoint sensitivity analysis procedures. These sensitivities not only identify the most important model parameters but also propagate parameter uncertainties within the data assimilation procedure, yielding predictive best-estimate quantities with reduced best-estimate uncertainties (i.e., "smaller" variance-covariance matrices). The assimilation procedure also provides a quantitative indication of the degree of agreement between computations and experiments. In particular, the paradigm application presented here indicates a path for validating and calibrating the thermal-hydraulic computational models used in reactor safety analyses. The concluding remarks highlight several important open issues whose resolution would significantly advance the area of predictive best-estimate modeling while opening new avenues for applications in nuclear reactor engineering and safety.
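The variance-reduction effect of assimilating a measurement can be illustrated with a minimal scalar example. This is an assumption-laden sketch using a Kalman-type linear update, not the paper's full procedure; all numerical values are invented:

```python
# A scalar model response y = H * p relates an uncertain parameter p to a
# measured response. Assimilating the measurement yields a best-estimate
# parameter with reduced variance, mirroring the "smaller covariance" effect.

H = 2.0          # sensitivity dy/dp (would come from forward/adjoint analysis)
p_prior = 1.0    # prior (nominal) parameter value
var_p = 0.25     # prior parameter variance
y_obs = 2.4      # measured response (hypothetical)
var_y = 0.1      # measurement variance

# Kalman-type gain and update
gain = var_p * H / (H * var_p * H + var_y)
p_post = p_prior + gain * (y_obs - H * p_prior)
var_post = (1.0 - gain * H) * var_p

print(p_post, var_post)
assert var_post < var_p  # assimilation shrinks the parameter variance
```

The posterior variance equals `var_p * var_y / (H**2 * var_p + var_y)`, which is always smaller than both the prior parameter variance (scaled appropriately) and never larger than `var_p`; the innovation `y_obs - H * p_prior` also quantifies the agreement between computation and experiment.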

    Deterministic Sensitivity and Uncertainty Methodology for Best Estimate System Codes applied in Nuclear Technology

    Nuclear Power Plant (NPP) technology has been developed on the basis of the traditional defense-in-depth philosophy, supported by deterministic and overly conservative methods of safety analysis. In the 1970s [1], conservative hypotheses were introduced into safety analyses to address existing uncertainties. Since then, intensive thermal-hydraulic experimental research has produced a considerable increase in knowledge and, consequently, the development of best-estimate codes that provide more realistic information about physical behaviour and identify the most relevant safety issues, allowing evaluation of the actual margins between calculated results and the acceptance criteria. However, best-estimate calculation results from complex thermal-hydraulic system codes (such as Relap5, Cathare, Athlet, Trace, etc.) are affected by unavoidable approximations whose effects cannot be quantified without computational tools that account for the various sources of uncertainty. The use of best-estimate (BE) codes in reactor technology, whether for design or for safety purposes, therefore implies understanding and accepting the limitations and deficiencies of those codes. Within this framework, a comprehensive approach has been developed for using quantified uncertainties arising from Integral Test Facilities (ITFs, [2]) and Separate Effect Test Facilities (SETFs, [3]) to calibrate complex computer models for application to NPP transient scenarios. The proposed methodology can accommodate multiple SETFs and ITFs to learn as much as possible about the uncertain parameters, allowing the computer-model predictions to be improved on the basis of the available experimental evidence.
    The proposed methodology constitutes a major step forward with respect to the expert judgment and statistical methods generally used, as it permits (a) establishing the uncertainties of any parameter characterizing the system through a fully mathematical approach in which the experimental evidence plays the major role, and (b) calculating an improved estimate of the computed response together with its improved (i.e., reduced) uncertainty.
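How combining evidence from several facilities tightens a parameter estimate can be sketched with inverse-variance pooling of independent estimates. The facility names, estimates, and variances below are invented for illustration; the actual methodology is considerably more elaborate:

```python
# Each (hypothetical) facility provides an estimate of one uncertain parameter
# together with its variance. Inverse-variance weighting gives the pooled
# best estimate, whose variance is smaller than any single experiment's.

experiments = [
    ("SETF-A", 1.05, 0.04),   # (name, estimate, variance) -- all illustrative
    ("SETF-B", 0.98, 0.09),
    ("ITF-1",  1.10, 0.16),
]

weights = [1.0 / var for _, _, var in experiments]
pooled = sum(w * est for (_, est, _), w in zip(experiments, weights)) / sum(weights)
pooled_var = 1.0 / sum(weights)

print(pooled, pooled_var)
```

Adding any further experiment can only increase the sum of weights, so the pooled variance monotonically decreases as more SETF/ITF evidence is assimilated, which is the "improved (reduced) uncertainty" behaviour the abstract describes.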