
    Identifying key sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment

    Published in Water Research, Vol. 47 (2013), DOI: 10.1016/j.watres.2013.05.021. This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher-order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed by one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher-order effects of these parameters are also shown to be a key source of uncertainty in effluent quality.
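    The workflow described above (screening followed by variance-based Sobol indices, including pairwise interactions) can be sketched with the SALib package, assuming it is installed; the parameter names, ranges and toy model below are hypothetical stand-ins, not the paper's wastewater model.

        # Sketch: variance-based (Sobol) sensitivity analysis with SALib on a toy
        # stand-in model; parameter names and ranges are illustrative only.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["k_nitrif", "k_denitrif", "n2o_yield"],   # hypothetical parameters
            "bounds": [[0.5, 2.0], [0.5, 2.0], [0.001, 0.05]],
        }

        def toy_model(x):
            # Placeholder for the plant model; includes an interaction term so
            # that second-order Sobol indices are non-trivial.
            k1, k2, y = x
            return y * k1 * k2 + 0.1 * k1

        X = saltelli.sample(problem, 1024, calc_second_order=True)
        Y = np.apply_along_axis(toy_model, 1, X)

        Si = sobol.analyze(problem, Y, calc_second_order=True)
        print("first-order:", Si["S1"])            # individual effects
        print("total:      ", Si["ST"])            # includes interaction effects
        print("second-order (pairs):\n", Si["S2"]) # parameter-pair interactions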

    Influence of parametric uncertainties and their interactions on small-signal stability: a case example of parallel-connected active loads in a DC microgrid

    Classical stability analysis techniques based on nominal models do not consider the uncertainty of system parameters, their interactions, and nonlinearity, which are important characteristics of practical, highly coupled microgrids. In this work, variance-based sensitivity analysis is used to identify parameter combinations that have a significant impact on the small-signal stability of a microgrid featuring two parallel active loads. The analysis indicates that the effectiveness of source-side damping is reduced when the resonant frequencies of the load input filters become matched. Further results using derivative-based sensitivity analysis reveal that source-side resistance can have a drastically different effect on stability when the load input filter resonant frequencies are matched than when they are well separated. These behaviours are verified using time-domain switching models.
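    A minimal sketch of the derivative-based step is given below, assuming a deliberately simplified small-signal model of two LC-filtered constant-power loads behind a common source resistance; the state-space model, parameter values and operating point are stand-ins, not the paper's microgrid.

        # Sketch: local sensitivity of a small-signal stability margin to the
        # source resistance, for matched vs. well-separated filter resonances.
        import numpy as np

        def state_matrix(Rs, L1, C1, L2, C2, P1=200.0, P2=200.0, V=48.0,
                         R1=0.05, R2=0.05):
            """Toy small-signal A matrix, states [i1, v1, i2, v2]; constant-power
            loads linearised at an approximate operating voltage V."""
            g1, g2 = P1 / V**2, P2 / V**2   # negative incremental conductances
            return np.array([
                [-(Rs + R1) / L1, -1.0 / L1, -Rs / L1,         0.0      ],
                [ 1.0 / C1,        g1 / C1,   0.0,              0.0      ],
                [-Rs / L2,         0.0,      -(Rs + R2) / L2,  -1.0 / L2 ],
                [ 0.0,             0.0,       1.0 / C2,         g2 / C2  ],
            ])

        def margin(Rs, L1, C1, L2, C2):
            """Stability margin: largest eigenvalue real part (negative = stable)."""
            return np.max(np.linalg.eigvals(state_matrix(Rs, L1, C1, L2, C2)).real)

        def d_margin_d_Rs(Rs, *filt, h=1e-4):
            """Central finite-difference sensitivity of the margin to Rs."""
            return (margin(Rs + h, *filt) - margin(Rs - h, *filt)) / (2 * h)

        Rs = 0.1
        matched   = (1e-3, 1e-4, 1e-3, 1e-4)   # identical filter resonant frequencies
        separated = (1e-3, 1e-4, 1e-3, 1e-5)   # well-separated resonant frequencies
        for label, filt in [("matched", matched), ("separated", separated)]:
            print(label, "margin:", margin(Rs, *filt),
                  "d(margin)/dRs:", d_margin_d_Rs(Rs, *filt))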

    Distributionally robust and generalizable inference

    We discuss recently developed methods that quantify the stability and generalizability of statistical findings under distributional changes. In many practical problems, the data are not drawn i.i.d. from the target population. For example, unobserved sampling bias, batch effects, or unknown associations might inflate the variance compared to i.i.d. sampling. For reliable statistical inference, it is thus necessary to account for these types of variation. We discuss and review two methods that allow quantifying distributional stability based on a single dataset. The first method computes the sensitivity of a parameter under worst-case distributional perturbations to understand which types of shift pose a threat to external validity. The second method treats distributional shifts as random, which allows assessing average robustness (instead of worst-case robustness). Based on a stability analysis of multiple estimators on a single dataset, it integrates both sampling and distributional uncertainty into a single confidence interval.
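    As a purely illustrative toy analogue of the worst-case perturbation idea (not the estimators proposed in the paper), one can reweight a single observed sample with exponentially tilted weights and track how a simple estimate moves as the reweighted distribution drifts away from the empirical one.

        # Sketch: sensitivity of the sample mean to tilted reweightings of one
        # dataset, with KL divergence from the empirical weights as shift size.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(loc=1.0, scale=2.0, size=500)   # hypothetical observed sample

        def tilted_estimate(x, t):
            """Weights proportional to exp(t * x); return (reweighted mean,
            KL divergence of the weights from the uniform empirical weights)."""
            w = np.exp(t * (x - x.max()))              # shift for numerical stability
            w /= w.sum()
            kl = np.sum(w * np.log(w * len(x)))
            return np.sum(w * x), kl

        for t in (-0.2, -0.1, 0.0, 0.1, 0.2):
            est, kl = tilted_estimate(x, t)
            print(f"tilt {t:+.1f}: mean {est:6.3f}, KL from empirical {kl:.4f}")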

    Sensitivity and uncertainty analysis of two human atrial cardiac cell models using Gaussian process emulators

    Biophysically detailed cardiac cell models reconstruct the action potential and calcium dynamics of cardiac myocytes. They aim to capture the biophysics of current flow through ion channels, pumps, and exchangers in the cell membrane, and are highly detailed. However, the relationship between model parameters and model outputs is difficult to establish because the models are both complex and non-linear. The consequences of uncertainty and variability in model parameters are therefore difficult to determine without undertaking large numbers of model evaluations. The aim of the present study was to demonstrate how sensitivity and uncertainty analysis using Gaussian process emulators can be used for a systematic and quantitative analysis of biophysically detailed cardiac cell models. We selected the Courtemanche and Maleckar models of the human atrial action potential for analysis because these models describe a similar set of currents, with different formulations. In our approach, Gaussian processes emulate the main features of the action potential and calcium transient. The emulators were trained with a set of design data comprising samples from parameter space and corresponding model outputs, initially obtained from 300 model evaluations. Variance-based sensitivity indices were calculated using the emulators, and first-order and total effect indices were calculated for each combination of parameter and output. The differences between the first-order and total effect indices indicated that the effect of interactions between parameters was small. A second set of emulators was then trained using a new set of design data comprising only the model parameters with a sensitivity index of more than 0.1 (10%). This second-stage analysis enabled comparison of mechanisms in the two models. The second-stage sensitivity indices enabled the relationship between the L-type Ca2+ current and the action potential plateau to be quantified in each model. Our quantitative analysis predicted that changes in maximum conductance of the ultra-rapid K+ channel IKur would have opposite effects on action potential duration in the two models, and this prediction was confirmed by additional simulations. This study has demonstrated that Gaussian process emulators are an effective tool for sensitivity and uncertainty analysis of biophysically detailed cardiac cell models.
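    The emulate-then-analyse workflow can be sketched as below, assuming scikit-learn is available; the three-parameter "cell model" is a made-up stand-in, and the pick-freeze estimators run on the cheap emulator rather than the simulator.

        # Sketch: GP emulator trained on model evaluations, then first-order and
        # total-effect Sobol indices estimated on the emulator (pick-freeze).
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def cell_model(theta):
            """Placeholder for an expensive simulator output (e.g. APD90)."""
            g1, g2, g3 = theta
            return 300.0 - 80.0 * g1 + 40.0 * g2 * g3

        rng = np.random.default_rng(1)
        X_train = rng.uniform(0.5, 1.5, size=(300, 3))       # scaled conductances
        y_train = np.apply_along_axis(cell_model, 1, X_train)

        gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.5] * 3),
                                      normalize_y=True).fit(X_train, y_train)

        N, d = 4096, 3
        A = rng.uniform(0.5, 1.5, size=(N, d))
        B = rng.uniform(0.5, 1.5, size=(N, d))
        fA, fB = gp.predict(A), gp.predict(B)
        var = np.var(np.concatenate([fA, fB]))
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]
            fABi = gp.predict(ABi)
            S1 = np.mean(fB * (fABi - fA)) / var          # first-order index
            ST = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-effect index
            print(f"g{i + 1}: S1 = {S1:.2f}, ST = {ST:.2f}")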

    Characterization of process-oriented hydrologic model behavior with temporal sensitivity analysis for flash floods in Mediterranean catchments

    This paper presents a detailed analysis of 10 flash flood events in the Mediterranean region using the distributed hydrological model MARINE. Characterizing catchment response during flash flood events may provide new and valuable insight into the dynamics involved in extreme catchment response and their dependency on physiographic properties and flood severity. The main objective of this study is to analyze the sensitivity of a flash-flood-dedicated hydrologic model with an approach that is new in hydrology, which decomposes the variance of model outputs to reveal temporal patterns of parameter sensitivity. Such approaches enable ranking of uncertainty sources for nonlinear and nonmonotonic mappings at a low computational cost. The hydrologic model and sensitivity analysis are used as learning tools on a large flash flood dataset. With Nash performances above 0.73 on average for this extended set of 10 validation events, the five sensitive parameters of the process-oriented distributed MARINE model are analyzed. This contribution shows that soil depth explains more than 80% of model output variance when most hydrographs are peaking. Moreover, the lateral subsurface transfer is responsible for 80% of model variance for some catchment-flood events' hydrographs during slow-declining limbs. The unexplained variance of the model output, representing interactions between parameters, proves to be very low during modeled flood peaks, indicating that the model's parsimonious parameterization is appropriate for tackling the problem of flash floods. Interactions observed after model initialization or rainfall intensity peaks suggest improving the representation of water partitioning between flow components, and the initialization itself. This paper gives a practical framework for applying this method to other models, landscapes and climatic conditions, potentially helping to improve process understanding and representation.
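    The temporal aspect (a sensitivity index computed at every output time step) can be illustrated with the toy two-parameter hydrograph below; the model, parameter ranges and time axis are invented for the sketch and only loosely echo the paper's soil-depth and lateral-transfer findings.

        # Sketch: time-varying first-order sensitivity indices for a toy
        # hydrograph model, estimated with a pick-freeze scheme per time step.
        import numpy as np

        T = np.linspace(0.5, 20.0, 80)                  # hours since rainfall onset

        def hydrograph(soil_depth, transfer_rate):
            """Toy flood hydrograph: soil depth controls the peak, lateral
            transfer controls the recession limb."""
            peak = 100.0 / soil_depth
            recession = np.exp(-transfer_rate * np.clip(T - 5.0, 0.0, None))
            rising = np.clip(T / 5.0, 0.0, 1.0)
            return peak * rising * recession

        rng = np.random.default_rng(2)
        N = 2000
        A = np.column_stack([rng.uniform(0.2, 2.0, N), rng.uniform(0.05, 0.5, N)])
        B = np.column_stack([rng.uniform(0.2, 2.0, N), rng.uniform(0.05, 0.5, N)])
        fA = np.array([hydrograph(*p) for p in A])      # shape (N, len(T))
        fB = np.array([hydrograph(*p) for p in B])
        var_t = np.var(np.vstack([fA, fB]), axis=0)

        for i, name in enumerate(["soil_depth", "lateral_transfer"]):
            ABi = A.copy()
            ABi[:, i] = B[:, i]
            fABi = np.array([hydrograph(*p) for p in ABi])
            S1_t = np.mean(fB * (fABi - fA), axis=0) / var_t   # S1 at each time step
            print(name, "S1 near the peak:", S1_t[20].round(2),
                  "on the recession limb:", S1_t[60].round(2))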

    Uncertainty and sensitivity analysis of functional risk curves based on Gaussian processes

    A functional risk curve gives the probability of an undesirable event as a function of the value of a critical parameter of a physical system under consideration. In several application settings, this curve is built using phenomenological numerical models which simulate complex physical phenomena. To avoid CPU-time-expensive numerical models, we propose to use Gaussian process regression to build functional risk curves. An algorithm is given to provide confidence bounds due to this approximation. Two methods of global sensitivity analysis of the models' random input parameters on the functional risk curve are also studied. In particular, the PLI sensitivity indices make it possible to understand the effect of misjudgment of the input parameters' probability density functions.
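    A minimal sketch of the surrogate step, assuming scikit-learn: fit a GP to a handful of expensive probability-of-failure evaluations and read approximate confidence bounds off the predictive standard deviation. The "expensive model" here is a hypothetical stand-in, and these bounds are not the paper's dedicated algorithm.

        # Sketch: GP regression of a functional risk curve with predictive bounds.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def failure_probability(a):
            """Placeholder for a CPU-expensive computation of P(failure | a)."""
            return 1.0 / (1.0 + np.exp(-(a - 5.0)))

        rng = np.random.default_rng(3)
        a_train = rng.uniform(0.0, 10.0, size=(25, 1))          # few expensive runs
        p_train = failure_probability(a_train).ravel()

        gp = GaussianProcessRegressor(RBF(2.0) + WhiteKernel(1e-4),
                                      normalize_y=True).fit(a_train, p_train)

        a_grid = np.linspace(0.0, 10.0, 101).reshape(-1, 1)
        p_mean, p_std = gp.predict(a_grid, return_std=True)
        lower, upper = p_mean - 1.96 * p_std, p_mean + 1.96 * p_std   # ~95% bounds
        print("risk at a=6.0:", p_mean[60].round(3),
              "bounds:", (lower[60].round(3), upper[60].round(3)))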

    Uncertainty Quantification in the Directed Energy Deposition Process Using Deep Learning-Based Probabilistic Approach

    This study quantifies the effects of uncertainty arising from process parameters, material properties, and boundary conditions in the directed energy deposition (DED) process of M4 high-speed steel using a deep learning (DL)-based probabilistic approach. A DL-based surrogate model is first constructed using data obtained from a finite element (FE) model, which was validated against experiment. Then, sources of uncertainty are characterized by the probabilistic method and propagated by the Monte Carlo (MC) method. Lastly, sensitivity analysis (SA) using the variance-based method is performed to identify the parameters inducing the most uncertainty in the melting pool depth. Using the DL-based surrogate model instead of the FE model alone significantly reduces the computational time of the MC simulation. The results indicate that all sources of uncertainty contribute to a substantial variation in the final printed product quality. Moreover, we find that the laser power, the convection, the scanning speed, and the thermal conductivity contribute the most uncertainty to the melting pool depth based on the SA results. These findings can be used as insights for process parameter optimization of the DED process.
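    The surrogate-plus-Monte-Carlo propagation step can be sketched as below, assuming scikit-learn; a small neural network stands in for the paper's deep learning surrogate, and the response function, parameter names and distributions are illustrative, not the DED finite-element model.

        # Sketch: train a neural-network surrogate on FE-style data, then use it
        # for cheap Monte Carlo propagation of input uncertainty.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def fe_model(power, speed, conductivity):
            """Placeholder for the expensive finite-element melt-pool-depth model."""
            return 0.4 * power / (speed * np.sqrt(conductivity))

        rng = np.random.default_rng(4)
        X = np.column_stack([rng.uniform(200, 400, 500),    # laser power [W]
                             rng.uniform(4, 10, 500),       # scanning speed [mm/s]
                             rng.uniform(15, 35, 500)])     # conductivity [W/m.K]
        y = fe_model(*X.T)

        surrogate = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=(64, 64),
                                               max_iter=3000,
                                               random_state=0)).fit(X, y)

        # Monte Carlo propagation: sample the uncertain inputs from their assumed
        # distributions and push them through the cheap surrogate.
        n = 100_000
        samples = np.column_stack([rng.normal(300, 15, n),
                                   rng.normal(7, 0.5, n),
                                   rng.normal(25, 2, n)])
        depth = surrogate.predict(samples)
        print(f"melt-pool depth: mean {depth.mean():.2f}, std {depth.std():.2f}, "
              f"95% interval {np.percentile(depth, [2.5, 97.5]).round(2)}")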

    Uncertainty and sensitivity analysis in quantitative pest risk assessments: practical rules for risk assessors

    Quantitative models have several advantages over qualitative methods for pest risk assessments (PRA). Quantitative models do not require the definition of categorical ratings and can be used to compute numerical probabilities of entry and establishment, and to quantify spread and impact. These models are powerful tools, but they include several sources of uncertainty that need to be taken into account by risk assessors and communicated to decision makers. Uncertainty analysis (UA) and sensitivity analysis (SA) are useful for analyzing uncertainty in models used in PRA, and are becoming more popular. However, these techniques should be applied with caution because several factors may influence their results. In this paper, a brief overview of methods of UA and SA is given, and a series of practical rules is defined that risk assessors can follow to improve the reliability of UA and SA results. These rules are illustrated in a case study based on the infection model of Magarey et al. (2005), where the results of UA and SA are shown to be highly dependent on the assumptions made about the probability distributions of the model inputs.
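    The dependence of SA results on the assumed input distributions can be illustrated with the toy two-input risk response below; it is not the Magarey et al. (2005) infection model, and the simple binned estimator of the first-order index is a generic stand-in.

        # Sketch: first-order sensitivity indices under two different assumptions
        # about the input distributions, using a binned (correlation-ratio) estimator.
        import numpy as np

        def risk(temperature, wetness):
            """Toy infection-risk response, illustrative only."""
            return (np.clip((temperature - 10.0) / 15.0, 0, 1)
                    * np.clip(wetness / 24.0, 0, 1))

        def first_order_index(x, y, bins=25):
            """Correlation-ratio estimate of S1 = Var(E[Y|X]) / Var(Y)."""
            edges = np.quantile(x, np.linspace(0, 1, bins + 1))
            idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
            cond_means = np.array([y[idx == b].mean() for b in range(bins)])
            weights = np.array([(idx == b).mean() for b in range(bins)])
            return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

        rng = np.random.default_rng(5)
        n = 50_000
        scenarios = {
            "uniform inputs":    (rng.uniform(5, 30, n), rng.uniform(0, 24, n)),
            "triangular inputs": (rng.triangular(5, 22, 30, n),
                                  rng.triangular(0, 20, 24, n)),
        }
        for name, (temp, wet) in scenarios.items():
            y = risk(temp, wet)
            print(name, "S1(temperature) =", round(first_order_index(temp, y), 2),
                  "S1(wetness) =", round(first_order_index(wet, y), 2))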

    Managing structural uncertainty in health economic decision models: a discrepancy approach

    Healthcare resource allocation decisions are commonly informed by computer model predictions of population mean costs and health effects. It is common to quantify the uncertainty in the prediction due to uncertain model inputs, but methods for quantifying uncertainty due to inadequacies in model structure are less well developed. We introduce an example of a model that aims to predict the costs and health effects of a physical activity promoting intervention. Our goal is to develop a framework in which we can manage our uncertainty about the costs and health effects due to deficiencies in the model structure. We describe the concept of 'model discrepancy': the difference between the model evaluated at its true inputs and the true costs and health effects. We then propose a method for quantifying discrepancy based on decomposing the cost-effectiveness model into a series of sub-functions and considering potential error at each sub-function. We use a variance-based sensitivity analysis to locate important sources of discrepancy within the model in order to guide model refinement. The resulting improved model is judged to contain less structural error, and the distribution on the model output better reflects our true uncertainty about the costs and effects of the intervention.
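    The bookkeeping behind the sub-function decomposition can be sketched as below; the toy cost-effectiveness model, numbers and discrepancy standard deviations are invented for illustration, and the crude one-term-at-a-time variance check stands in for a full variance-based analysis.

        # Sketch: attach an independent discrepancy term to each sub-function of a
        # toy cost-effectiveness model and see which term drives output variance.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 200_000

        def net_benefit(d_uptake=0.0, d_effect=0.0, d_cost=0.0, wtp=20_000.0):
            """Toy model: uptake -> QALY gain -> incremental net benefit, with a
            discrepancy term added at each sub-function's output."""
            uptake = 0.3 + d_uptake                 # proportion taking up the intervention
            qaly_gain = uptake * 0.05 + d_effect    # population mean QALY gain
            cost = 150.0 + d_cost                   # mean intervention cost per person
            return wtp * qaly_gain - cost

        discrepancies = {                           # assumed std. dev. of each term
            "uptake sub-function": ("d_uptake", 0.05),
            "effect sub-function": ("d_effect", 0.01),
            "cost sub-function":   ("d_cost",   30.0),
        }
        for label, (arg, sd) in discrepancies.items():
            out = net_benefit(**{arg: rng.normal(0.0, sd, n)})
            print(f"{label}: output s.d. {out.std():8.1f} from that term alone")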