7 research outputs found

    Global sensitivity analysis of computer models with functional inputs

    Global sensitivity analysis is used to quantify the influence of uncertain input parameters on the response variability of a numerical model. The common quantitative methods are applicable to computer codes with scalar input variables. This paper aims to illustrate different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some input variables are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which require a preliminary meta-modeling step before the sensitivity analysis can be performed. We propose the use of the joint modeling approach, i.e., modeling the mean and the dispersion of the code outputs simultaneously with two interlinked Generalized Linear Models (GLM) or Generalized Additive Models (GAM). The "mean" model allows the sensitivity indices of each scalar input variable to be estimated, while the "dispersion" model allows the total sensitivity index of the functional input variables to be derived. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the methodology is applied to a real industrial computer code that simulates nuclear fuel irradiation.
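
    The joint mean/dispersion idea above can be illustrated on a toy example: a "mean" model fitted on the scalar inputs carries their sensitivity indices, while the residual "dispersion" carries the total index of the functional input. The sketch below is a minimal illustration of that decomposition, not the paper's GLM/GAM implementation; the toy function and all variable names are assumptions.

```python
# Minimal sketch (not the paper's code) of the joint-modelling idea: two scalar
# inputs x1, x2 and one functional input reduced here to a random effect eps.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
eps = rng.normal(scale=0.5, size=n)                        # stand-in for the functional input
y = np.sin(2 * np.pi * x1) + 0.5 * x2 + (0.3 + x1) * eps   # toy "computer code" output

# "Mean" model in the scalar inputs: its explained variance carries their indices.
features = np.column_stack([np.sin(2 * np.pi * x1), x2])
mean_model = LinearRegression().fit(features, y)
residual = y - mean_model.predict(features)

# "Dispersion" (residual) variance carries the total index of the functional input.
print(f"total Sobol index of the functional input ~ {np.var(residual) / np.var(y):.2f}")
```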

    Calculations of Sobol indices for the Gaussian process metamodel

    Global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol indices. However, these techniques require a large number of model evaluations and are often impractical for computationally expensive computer codes. A well-known and widely used solution is to replace the computer code with a metamodel, which predicts the model responses with negligible computation time and makes the estimation of Sobol indices straightforward. In this paper, we discuss the Gaussian process model, which gives analytical expressions for the Sobol indices. Two approaches to computing the Sobol indices are studied: the first is based on the predictor of the Gaussian process model, the second on the global stochastic process model. Comparisons between the two estimates on analytical examples show the superiority of the second approach in terms of convergence and robustness. Moreover, the second approach accounts for the modeling error of the Gaussian process model by directly providing confidence intervals on the Sobol indices. These techniques are finally applied to a real case of hydrogeological modeling.
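
    A hedged sketch of the first approach mentioned above (Sobol indices computed from the predictor of a Gaussian process metamodel) is given below: the surrogate is fitted on a small design, and the indices are then estimated by plain Monte Carlo with the pick-freeze estimator. The test function, sample sizes, and kernel choice are illustrative assumptions, not the paper's settings.

```python
# Fit a GP metamodel on a small design, then estimate first-order Sobol indices
# of its (cheap) predictor by the pick-freeze Monte Carlo estimator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def code(x):                      # cheap stand-in for an expensive simulator
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(-np.pi, np.pi, size=(60, 2))
gp = GaussianProcessRegressor(kernel=RBF([1.0, 1.0])).fit(X_train, code(X_train))

n = 50_000
A = rng.uniform(-np.pi, np.pi, size=(n, 2))
B = rng.uniform(-np.pi, np.pi, size=(n, 2))
yA = gp.predict(A)
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                               # freeze coordinate i at its A values
    S_i = np.cov(yA, gp.predict(ABi))[0, 1] / np.var(yA)
    print(f"first-order Sobol index S_{i + 1} ~ {S_i:.2f}")
```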

    A guided simulation methodology for dynamic probabilistic risk assessment of complex systems

    Probabilistic risk assessment (PRA) is a systematic process of examining how engineered systems work in order to ensure safety. With the growth in the size of dynamic systems and the complexity of the interactions between hardware, software, and humans, it is extremely difficult to enumerate the risky scenarios with traditional PRA methods. Over the past 15 years, a host of dynamic PRA (DPRA) methods have been proposed as supplemental tools to traditional PRA for dealing with complex dynamic systems. A new dynamic probabilistic risk assessment framework is proposed in this dissertation, employing a new exploration strategy in which the engineering knowledge of the system is explicitly used to guide the simulation and achieve higher efficiency and accuracy. The engineering knowledge is captured in the "Planner", which is responsible for generating plans as a high-level map to guide the simulation. A Scheduler guides the simulation by controlling the timing and occurrence of the random events: during the simulation, possible random events are proposed to the Scheduler at branch points, and the Scheduler decides which events are to be simulated, favoring the events with higher values. The value of a proposed event depends on the information gain from exploring that scenario and on the importance factor of the scenario; the information gain is measured by information entropy, and the importance factor is based on engineering judgment. The simulation results are recorded and grouped for later studies, and the Planner may "learn" from them, updating the plan to guide further simulation. SIMPRA is the software package that implements the new methodology. It provides users with a friendly interface and a rich DPRA library to aid in the construction of the simulation model. The engineering knowledge can be entered into the Planner, which generates a plan automatically, and the Scheduler guides the simulation according to that plan. The simulation generates many accident event sequences and estimates of the end-state probabilities.
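
    The scheduling rule described above (value = information gain x importance) can be sketched in a few lines. The snippet below is purely illustrative and is not SIMPRA code; the entropy-based gain, the event names, and all numbers are assumptions made for the example.

```python
# Toy scheduler: at a branch point, score each proposed event by an entropy-based
# information gain times an engineering-judgment importance factor, and simulate
# the highest-value event first.
import math

def information_gain(p_end_state):
    """Binary entropy of the current end-state probability estimate: branches
    whose outcome is most uncertain are the most informative to explore."""
    p = min(max(p_end_state, 1e-9), 1.0 - 1e-9)
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def event_value(p_end_state, importance):
    return information_gain(p_end_state) * importance

# (event, current estimate of P(undesired end state), importance factor)
proposed = [
    ("pump_fails_to_start", 0.05, 0.9),
    ("valve_sticks_open",   0.60, 0.4),
    ("operator_error",      0.30, 0.7),
]
best = max(proposed, key=lambda e: event_value(e[1], e[2]))
print("scheduler picks:", best[0])
```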

    A risk-based maintenance methodology of industrial systems

    Maintenance is an essential task that must be carried out in an efficient and effective manner in order to sustain and prolong the physical assets of a company. Maintenance may be defined as any action whose objective is to retain or restore an item to a state in which it can perform its required function. It is therefore a valuable part of most industries today, helping to improve productivity and output while reducing the costs associated with downtime and eliminating equipment failures. Like all other functions of a manufacturing company, maintenance must be a cost-effective activity, so it is essential for a company to develop a cost-effective maintenance strategy. Delay-time analysis is a maintenance modelling technique which can achieve this goal in a manufacturing environment: given certain input parameters, it establishes an optimum inspection interval from both a downtime standpoint and a cost standpoint. The delay-time analysis concept is further developed in this thesis to establish an environmental model. Alongside the downtime model and the cost model, the environmental model gives a measure of the consequence of failure in terms of cost both to the company and to the environment; it has been applied to a company producing a product which is potentially harmful to both humans and the environment. The use of delay-time analysis to establish the downtime and cost models relies predominantly on objective historical data which, given the correct types of data, makes model development a powerful and accurate tool. The environmental model, however, relies heavily on subjective data and expert judgement to establish the required parameters. To overcome the inevitable inaccuracies in subjective expert judgement, due mainly to individual perception, the environmental model is further enhanced using fuzzy set modelling. Developing a delay-time analysis model also involves establishing several important parameters, one of which is the failure rate (λ). This parameter forms an integral part of a delay-time analysis study but is usually established in a simplistic manner from historical information (i.e., number of failures divided by time) using statistical averages. Understanding and identifying the influencing factors responsible for failure improves the understanding and accuracy of the failure rate, and this thesis develops the parameter with Bayesian network modelling, which allows the differing influences responsible for failure to be considered in an exact and precise manner. The outcome of this research is a methodology, based on delay-time analysis modelling, that aids decision making in a manufacturing environment. The delay-time analysis model is further improved through fuzzy set modelling and Bayesian network modelling, and both models have been integrated into the delay-time model. A direct comparison is drawn between the original delay-time model and the enhanced delay-time model in order to highlight the improvements of the integrated model.
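
    The basic delay-time trade-off described above (longer inspection intervals mean fewer inspections but more breakdowns) can be written down numerically. The sketch below uses the classical Christer-style downtime model with an exponential delay-time distribution; it is not the thesis's own model, and every parameter value is an illustrative assumption.

```python
# Expected downtime per unit time as a function of the inspection interval T,
# minimised over a grid of candidate intervals.
import numpy as np

lam = 0.5          # defect arrival rate (defects per day), assumed
mean_delay = 10.0  # mean delay time from defect to failure (days), assumed
d_fail = 1.0       # downtime per breakdown repair (days), assumed
d_insp = 0.25      # downtime per inspection (days), assumed

def downtime_rate(T):
    # b(T): probability that a defect arising in (0, T) becomes a breakdown
    # before the next inspection, for an exponential delay-time distribution.
    h = np.linspace(0.0, T, 2_000)
    b = np.trapz(1.0 - np.exp(-h / mean_delay), h) / T
    return (lam * T * b * d_fail + d_insp) / (T + d_insp)

T_grid = np.linspace(1.0, 60.0, 600)
T_opt = T_grid[np.argmin([downtime_rate(T) for T in T_grid])]
print(f"optimum inspection interval ~ {T_opt:.1f} days")
```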

    Uncertainty propagation through large nonlinear models.

    Uncertainty analysis in computer models has seen a rise in interest in recent years as a result of the increased complexity of (and dependence on) computer models in the design process. A major problem, however, is that the computational cost of propagating uncertainty through large nonlinear models can be prohibitive with conventional methods (such as Monte Carlo methods). A powerful solution to this problem is to use an emulator: a mathematical representation of the model built from a small set of model runs at specified points in input space. Such emulators are vastly cheaper to run and can be used to mimic the "true" model, so that uncertainty analysis and sensitivity analysis can be performed at a greatly reduced computational cost. The work here investigates the use of an emulator known as a Gaussian process (GP), an advanced probabilistic form of regression hitherto relatively unknown in engineering. The GP is used to perform uncertainty and sensitivity analysis on nonlinear finite element models of a human heart valve and a novel airship design. Aside from results specific to these models, it is evident that a limitation of the GP is that non-smooth model responses cannot be accurately represented. Consequently, an extension to the GP is investigated which uses a classification and regression tree to partition the input space, such that non-smooth responses, including bifurcations, can be modelled at the partition boundaries. This new emulator is applied to a simple nonlinear problem and then to a bifurcating finite element model. The method is found to be successful, and actually reduces computational cost, although it is noted that bifurcations that are not axis-aligned cannot realistically be dealt with.
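
    The treed-emulator extension mentioned above can be sketched as follows: a regression tree first partitions the input space around the discontinuity, and a separate Gaussian process is then fitted in each leaf. This is only an illustration of the idea, not the thesis's implementation; the step function, kernel, and settings are assumptions.

```python
# Partition a discontinuous response with a regression tree, then fit one GP per leaf.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(120, 1))
y = np.where(X[:, 0] < 0.5, np.sin(6 * X[:, 0]), 2.0 + np.cos(6 * X[:, 0]))  # jump at 0.5

tree = DecisionTreeRegressor(max_leaf_nodes=2).fit(X, y)   # one split, ideally near the jump
leaves = tree.apply(X)
gps = {leaf: GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6)
               .fit(X[leaves == leaf], y[leaves == leaf])
       for leaf in np.unique(leaves)}

# Predict by routing each new point to the GP of its leaf.
X_new = np.array([[0.25], [0.75]])
y_new = [gps[leaf].predict(x.reshape(1, -1))[0]
         for leaf, x in zip(tree.apply(X_new), X_new)]
print(y_new)
```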

    Variance decomposition-based sensitivity analysis via neural networks

    No full text
    M. Marseguerra; R. Masini; E. Zio; G. Cojazzi

    Variance Decomposition-Based Sensitivity Analysis via Neural Networks.

    No full text
    Abstract not available. JRC.G - Institute for the Protection and the Security of the Citizen (Ispra)