
    Understanding predictive uncertainty in hydrologic modeling: The challenge of identifying input and structural errors

    Meaningful quantification of data and structural uncertainties in conceptual rainfall-runoff modeling is a major scientific and engineering challenge. This paper focuses on the total predictive uncertainty and its decomposition into input and structural components under different inference scenarios. Several Bayesian inference schemes are investigated, differing in the treatment of rainfall and structural uncertainties and in the precision of the priors describing rainfall uncertainty. Compared with traditional lumped additive error approaches, the quantification of the total predictive uncertainty in the runoff is improved when rainfall and/or structural errors are characterized explicitly. However, the decomposition of the total uncertainty into individual sources is more challenging. In particular, poor identifiability may arise when the inference scheme represents rainfall and structural errors using separate probabilistic models. The inference becomes ill-posed unless sufficiently precise prior knowledge of data uncertainty is supplied; this ill-posedness can often be detected from the behavior of the Monte Carlo sampling algorithm. Moreover, the priors on the data quality must also be sufficiently accurate if the inference is to be reliable and support meaningful uncertainty decomposition. Our findings highlight the inherent limitations of inferring inaccurate hydrologic models from rainfall-runoff data with large unknown errors; Bayesian total error analysis can overcome these problems by using independent prior information. The need for deriving independent descriptions of the uncertainties in the input and output data is clearly demonstrated.
    Benjamin Renard, Dmitri Kavetski, George Kuczera, Mark Thyer, and Stewart W. Franks
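
    For illustration only (not the authors' code): the kind of inference scheme described above can be sketched as a log-posterior that combines latent rainfall log-multipliers (the input-error model) with a lumped additive residual (structural/output error). The one-parameter linear store, the per-step multipliers phi, and all names below are assumptions made for the sketch.

    ```python
    import numpy as np
    from scipy import stats

    def simulate_runoff(rain, k):
        """Toy single linear store: a fraction k of storage drains each step."""
        s = 0.0
        q = np.zeros(len(rain))
        for t, r in enumerate(rain):
            s += r
            q[t] = k * s
            s -= q[t]
        return q

    def log_posterior(theta, obs_rain, obs_runoff):
        """theta = [k, sigma, phi_1, ..., phi_T], one rainfall log-multiplier per step."""
        k, sigma = theta[0], theta[1]
        phi = np.asarray(theta[2:])
        if not (0.0 < k < 1.0) or sigma <= 0.0:
            return -np.inf
        # Input-error model: latent "true" rainfall = observed rainfall * exp(phi).
        true_rain = obs_rain * np.exp(phi)
        sim = simulate_runoff(true_rain, k)
        # Lumped additive residual absorbing structural and output errors.
        log_lik = stats.norm.logpdf(obs_runoff, loc=sim, scale=sigma).sum()
        # Priors: with a vague prior on phi, the multipliers and sigma compete to
        # explain the same misfit, which is the ill-posedness discussed above.
        log_prior = (stats.norm.logpdf(phi, loc=0.0, scale=0.2).sum()
                     + stats.halfnorm.logpdf(sigma, scale=1.0))
        return log_lik + log_prior
    ```

    Passing such a log-posterior to an MCMC sampler would be expected to show the behavior the abstract describes: the sampler struggles to separate the input and structural error components unless the prior on the multipliers is sufficiently precise.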

    Identifiability of parameters in latent structure models with many observed variables

    While hidden class models of various types arise in many statistical applications, it is often difficult to establish the identifiability of their parameters. Focusing on models in which there is some structure of independence of some of the observed variables conditioned on hidden ones, we demonstrate a general approach for establishing identifiability utilizing algebraic arguments. A theorem of J. Kruskal for a simple latent-class model with finite state space lies at the core of our results, though we apply it to a diverse set of models. These include mixtures of both finite and nonparametric product distributions, hidden Markov models, and random graph mixture models, and lead to a number of new results and improvements to old ones. In the parametric setting, this approach indicates that for such models the classical definition of identifiability is typically too strong. Instead, generic identifiability holds, which implies that the set of nonidentifiable parameters has measure zero, so that parameter inference is still meaningful. In particular, this sheds light on the properties of finite mixtures of Bernoulli products, which have been used for decades despite being known to have nonidentifiable parameters. In the nonparametric setting, we again obtain identifiability only when certain restrictions are placed on the distributions that are mixed, but we explicitly describe the conditions.
    Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/09-AOS689
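
    For context, the Kruskal theorem invoked here concerns uniqueness of a trilinear (three-way array) decomposition; in its standard formulation (stated here for background, not quoted from the paper):

    \[
    T \;=\; \sum_{j=1}^{r} a_j \otimes b_j \otimes c_j,
    \qquad
    k_A + k_B + k_C \;\ge\; 2r + 2
    \;\Longrightarrow\;
    A,\, B,\, C \text{ are unique up to simultaneous column permutation and rescaling,}
    \]

    where \(k_M\) denotes the Kruskal rank of the factor matrix \(M\), i.e. the largest \(k\) such that every set of \(k\) columns of \(M\) is linearly independent.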

    Half-trek criterion for generic identifiability of linear structural equation models

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations and bidirected edges indicate possible correlations among noise terms. We study parameter identifiability in these models, that is, we ask for conditions that ensure that the edge coefficients and correlations appearing in a linear structural equation model can be uniquely recovered from the covariance matrix of the associated distribution. We treat the case of generic identifiability, where unique recovery is possible for almost every choice of parameters. We give a new graphical condition that is sufficient for generic identifiability and can be verified in time that is polynomial in the size of the graph. It improves criteria from prior work and does not require the directed part of the graph to be acyclic. We also develop a related necessary condition and examine the "gap" between sufficient and necessary conditions through simulations on graphs with 25 or 50 nodes, as well as exhaustive algebraic computations for graphs with up to five nodes.
    Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/12-AOS1012
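
    Concretely, in the standard parametrization of such models (given here for background, not quoted from the paper), a directed-edge coefficient matrix \(\Lambda\) and a noise covariance matrix \(\Omega\) determine the observed covariance matrix via

    \[
    X = \Lambda^{\top} X + \varepsilon,
    \qquad \operatorname{Cov}(\varepsilon) = \Omega,
    \qquad\Longrightarrow\qquad
    \Sigma \;=\; (I-\Lambda)^{-\top}\,\Omega\,(I-\Lambda)^{-1},
    \]

    where \(\lambda_{ij}\neq 0\) only if \(i \to j\) is a directed edge and \(\omega_{ij}\neq 0\) only if \(i \leftrightarrow j\) is a bidirected edge (or \(i=j\)). Generic identifiability then asks whether \((\Lambda,\Omega)\) can be recovered from \(\Sigma\) for almost every choice of parameters.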
