
    Understanding predictive uncertainty in hydrologic modeling: The challenge of identifying input and structural errors

    Meaningful quantification of data and structural uncertainties in conceptual rainfall-runoff modeling is a major scientific and engineering challenge. This paper focuses on the total predictive uncertainty and its decomposition into input and structural components under different inference scenarios. Several Bayesian inference schemes are investigated, differing in the treatment of rainfall and structural uncertainties and in the precision of the priors describing rainfall uncertainty. Compared with traditional lumped additive error approaches, the quantification of the total predictive uncertainty in the runoff is improved when rainfall and/or structural errors are characterized explicitly. However, the decomposition of the total uncertainty into individual sources is more challenging. In particular, poor identifiability may arise when the inference scheme represents rainfall and structural errors using separate probabilistic models. The inference becomes ill-posed unless sufficiently precise prior knowledge of data uncertainty is supplied; this ill-posedness can often be detected from the behavior of the Monte Carlo sampling algorithm. Moreover, the priors on the data quality must also be sufficiently accurate if the inference is to be reliable and support meaningful uncertainty decomposition. Our findings highlight the inherent limitations of inferring inaccurate hydrologic models using rainfall-runoff data with large unknown errors. Bayesian total error analysis can overcome these problems using independent prior information. The need for deriving independent descriptions of the uncertainties in the input and output data is clearly demonstrated.
    Benjamin Renard, Dmitri Kavetski, George Kuczera, Mark Thyer, and Stewart W. Franks
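The difference between a lumped additive error model and an explicit treatment of input uncertainty can be illustrated with a toy simulation. This is a minimal sketch under assumed values, not the Bayesian total error analysis scheme of the paper: the linear-reservoir model, the lognormal rainfall-multiplier prior, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def runoff(rain, k=0.3):
    """Toy linear-reservoir model: storage s fills with rain, drains as q = k*s."""
    s, q = 0.0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

rain_obs = rng.gamma(2.0, 5.0, size=50)   # "observed" rainfall series (mm/day)
sigma_resid = 0.5                         # additive residual error (mm/day)
sigma_mult = 0.2                          # prior sd of log rainfall multipliers

# Lumped scheme: input and structural errors absorbed into one additive term.
lumped = np.array([runoff(rain_obs) + rng.normal(0, sigma_resid, 50)
                   for _ in range(200)])

# Explicit input-error scheme: latent rainfall multipliers drawn from a prior
# and propagated through the model before adding the residual term.
explicit = np.array([runoff(rain_obs * rng.lognormal(0, sigma_mult, 50))
                     + rng.normal(0, sigma_resid, 50)
                     for _ in range(200)])

# Explicit treatment widens the predictive bands, reflecting input uncertainty.
print(lumped.std(axis=0).mean(), explicit.std(axis=0).mean())
```

Decomposing that widened band back into input versus structural contributions is exactly where the identifiability problems discussed in the abstract arise.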

    Parameter Estimation and Uncertainty Quantification for an Epidemic Model

    We examine estimation of the parameters of Susceptible-Infective-Recovered (SIR) models in the context of least squares. We review the use of asymptotic statistical theory and sensitivity analysis to obtain measures of uncertainty for estimates of the model parameters and the basic reproductive number (R0), an epidemiologically significant parameter grouping. We find that estimates of different parameters, such as the transmission parameter and recovery rate, are correlated, with the magnitude and sign of this correlation depending on the value of R0. Situations are highlighted in which this correlation allows R0 to be estimated with greater ease than its constituent parameters. Implications of correlation for parameter identifiability are discussed. Uncertainty estimates and sensitivity analysis are used to investigate how the frequency at which data are sampled affects the estimation process and how the accuracy and uncertainty of estimates improve as data are collected over the course of an outbreak. We assess the informativeness of individual data points in a given time series to determine when more frequent sampling (if possible) would prove most beneficial to the estimation process. This technique can be used to design data sampling schemes in more general contexts.
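The least-squares workflow described above can be sketched on synthetic data: fit an SIR model to noisy prevalence observations, form the asymptotic covariance from the Jacobian, and read off R0 and the beta-gamma correlation. All parameter values and the noise level are illustrative assumptions, not the paper's case study.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def sir(y, t, beta, gamma):
    """SIR dynamics with fractions s, i, r."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t = np.linspace(0, 60, 31)
true = (0.5, 0.2)                               # beta, gamma -> R0 = 2.5
y0 = (0.99, 0.01, 0.0)
rng = np.random.default_rng(1)
i_obs = odeint(sir, y0, t, args=true)[:, 1] + rng.normal(0, 0.005, t.size)

def resid(p):
    return odeint(sir, y0, t, args=tuple(p))[:, 1] - i_obs

fit = least_squares(resid, x0=[0.4, 0.1])
beta_hat, gamma_hat = fit.x
r0_hat = beta_hat / gamma_hat

# Asymptotic covariance sigma^2 (J^T J)^{-1}; off-diagonal term gives the
# correlation between the transmission and recovery parameters.
J = fit.jac
cov = (resid(fit.x) @ resid(fit.x) / (t.size - 2)) * np.linalg.inv(J.T @ J)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(r0_hat, corr)
```

Repeating the fit with coarser time grids `t` is one way to probe the sampling-frequency effects the abstract discusses.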

    Identifiability issues of age-period and age-period-cohort models of the Lee-Carter type

    The predominant way of modelling mortality rates is the Lee-Carter model and its many extensions, which use a latent process to forecast. These models are estimated using a two-step procedure that leads to an inconsistent treatment of the latent variable. This paper considers identifiability issues of these models from a perspective that acknowledges the latent variable as a stochastic process from the beginning. We call this perspective the plug-in age-period or plug-in age-period-cohort model. Defining a parameter vector that includes the underlying parameters of this process rather than its realisations, we investigate whether the expected values and covariances of the plug-in Lee-Carter models are identifiable. It will be seen, for example, that even if both steps of the estimation procedure are identifiable in a certain sense, this does not necessarily carry over to the plug-in models.
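The two-step procedure in question can be sketched on simulated data: step 1 estimates the Lee-Carter surface log m(x,t) = a_x + b_x k_t by SVD under the usual constraints (sum of b_x equal to 1, sum of k_t equal to 0), and step 2 fits a random walk with drift to the *realised* k_t, treating the estimate as data. This is a generic illustration of the classical estimation, not the plug-in construction the paper develops; all dimensions and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
ages, years = 20, 40
a = np.linspace(-8, -2, ages)                  # true age profile a_x
b = np.full(ages, 1 / ages)                    # true loadings b_x, sum = 1
k = np.cumsum(rng.normal(-0.5, 0.3, years))    # latent random walk k_t
log_m = a[:, None] + np.outer(b, k) + rng.normal(0, 0.02, (ages, years))

# Step 1: SVD estimation with the identification constraints above.
a_hat = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None])
b_hat = U[:, 0] / U[:, 0].sum()                # rescale so sum(b_hat) = 1
k_hat = s[0] * Vt[0] * U[:, 0].sum()

# Step 2: fit a random walk with drift to the realised k_hat.
drift = np.diff(k_hat).mean()

recon = a_hat[:, None] + np.outer(b_hat, k_hat)
print(np.abs(recon - log_m).max(), drift)
```

The fitted surface `recon` is invariant to the constraint choice, but the parameters of the step-2 process depend on the realisation of k_hat, which is the inconsistency the plug-in perspective makes explicit.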

    Spatial two tissue compartment model for DCE-MRI

    In the quantitative analysis of Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI), compartment models make it possible to describe the uptake of contrast medium with biologically meaningful kinetic parameters. As simple models often fail to adequately describe the observed uptake behavior, more complex compartment models have been proposed. However, the nonlinear regression problem arising from more complex compartment models often suffers from parameter redundancy. In this paper, we incorporate spatial smoothness on the kinetic parameters of a two-tissue compartment model by imposing Gaussian Markov random field priors on them. We analyse to what extent this spatial regularisation helps to avoid parameter redundancy and to obtain stable parameter estimates. Choosing a fully Bayesian approach, we obtain posteriors and point estimates by running Markov chain Monte Carlo simulations. The proposed approach is evaluated for simulated concentration time curves as well as for in vivo data from a breast cancer study.
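The effect of a Gaussian Markov random field prior on a kinetic parameter can be sketched in one dimension: for a first-order random-walk prior, the posterior mode is a penalised least-squares smooth of the unstable per-voxel estimates. This toy example uses a hypothetical smooth parameter profile and noise level, and a direct linear solve in place of the paper's full MCMC over the nonlinear compartment model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50                                           # voxels along one image row
ktrans_true = 0.3 + 0.2 * np.sin(np.linspace(0, np.pi, n))  # smooth truth
noisy = ktrans_true + rng.normal(0, 0.08, n)     # unstable per-voxel fits

# First-order GMRF prior => posterior mode solves (I + lam * D^T D) x = y,
# where D is the first-difference matrix and lam the prior precision ratio.
lam = 5.0
D = np.diff(np.eye(n), axis=0)
x = np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)

print(np.abs(noisy - ktrans_true).mean(), np.abs(x - ktrans_true).mean())
```

Borrowing strength from neighbouring voxels reduces the error of each estimate, which is the mechanism by which the spatial prior counteracts parameter redundancy.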

    Adaptive optimal operation of a parallel robotic liquid handling station

    Results are presented from the optimal operation of a fully automated robotic liquid handling station where parallel experiments are performed for calibrating a kinetic fermentation model. To increase the robustness against uncertainties and/or wrong assumptions about the parameter values, an iterative calibration and experiment design approach is adopted. Its implementation yields a stepwise reduction of parameter uncertainties together with an adaptive redesign of reactor feeding strategies whenever new measurement information is available. The case study considers the adaptive optimal design of 4 parallel fed-batch strategies implemented in 8 mini-bioreactors. Details are given on the size and complexity of the problem and the challenges related to calibration of over-parameterized models and scarce, non-informative measurement data. It is shown how methods for parameter identifiability analysis and numerical regularization can be used for monitoring the progress of the experimental campaigns in terms of generated information regarding parameters, and for selection of the best-fitting parameter subset.
    BMBF, 02PJ1150, joint project: Platform technologies for automated bioprocess development (AutoBio); subproject: Automated bioprocess development exemplified by new nucleoside phosphorylases
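A standard ingredient of the identifiability analysis mentioned above is the Fisher information matrix built from output sensitivities: near-zero eigenvalues flag parameter combinations the data cannot resolve, motivating estimation of a reduced parameter subset. The exponential model below is a deliberately over-parameterised toy, not the fermentation model of the study.

```python
import numpy as np

# Hypothetical over-parameterised model: y = p0 * p2 * exp(-p1 * t), where
# p0 and p2 enter only through their product and are thus redundant.
t = np.linspace(0, 10, 40)

def model(p):
    return p[0] * p[2] * np.exp(-p[1] * t)

p_nom = np.array([2.0, 0.3, 1.0])
eps = 1e-6
J = np.column_stack([
    (model(p_nom + eps * np.eye(3)[i]) - model(p_nom)) / eps
    for i in range(3)
])

# Fisher information matrix (unit noise); a tiny smallest eigenvalue
# relative to the largest indicates a non-identifiable direction.
fim = J.T @ J
w = np.linalg.eigvalsh(fim)
print(w.min() / w.max())
```

In practice such a diagnostic, recomputed as each batch of measurements arrives, is one way to monitor how much parameter information an experimental campaign has generated.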
