
    Analyzing the Data-Rich-But-Information-Poor Syndrome in Dutch Water Management in Historical Perspective

    Water quality monitoring has developed over the past century from an unplanned, isolated activity into an important discipline in water management. This development also brought about discontent between information users and information producers over the usefulness and usability of information, often referred to in the literature as the data-rich-but-information-poor syndrome. This article aims to gain a better understanding of this issue by studying some five decades of Dutch national water quality monitoring, by analyzing four studies in which the role and use of information are discussed from different perspectives, and by relating these findings to what the literature considers useful information. The article concludes that a “water information gap” exists, rooted in the two groups' differing mutual perceptions and expectations of what useful information is, and that it can be overcome by improving communication. Such communication should be based on a willingness to understand and deal with different mindframes, and on a methodology that guides and structures the interactions.

    Efficient and robust analysis of interlaboratory studies

    In this paper we present the ab initio derivation of an estimator for the mean and variance of a sample of data, such as obtained from proficiency tests. This estimator has already been used for some time in such analyses, but a thorough derivation, together with a detailed analysis of its properties, has been missing until now. The estimator uses the information contained in the data, including uncertainty, represented via probability density functions (pdfs). An implementation of the approach is also given for the case where uncertainty information is not available: the so-called normal distribution approach (NDA). The estimation procedure is based on calculating the centroid of the ensemble of pdfs. This centroid is obtained by solving the eigenvalue problem for the so-called similarity matrix, whose elements measure the similarity (or overlap) between different pdfs in terms of the Bhattacharyya coefficient. Since evaluating an eigenvalue problem is standard nowadays, the method is extremely fast. The first and second moments of the centroid pdf yield the mean and variance of the dataset. The properties of the estimator are analyzed extensively: we derive its variance and show the connection between the present estimator and Principal Component Analysis. Furthermore, we study its behavior in several limiting cases, as encountered in data that are very coherent or very incoherent, and check its consistency. In particular, we investigate how sensitive the estimator is to outliers by determining its breakdown point; in the normal distribution approach the breakdown point is shown to be optimal, i.e., 50%. The largest eigenvalue(s) of the similarity matrix provide important information: if the largest eigenvalue is close to the dimension of the matrix, the data are very coherent, lying close to each other with similar uncertainties; if two (or more) of the largest eigenvalues are (nearly) equal, the data fall apart into two (or more) clusters.
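
    The procedure described in this abstract lends itself to a compact numerical sketch. The following is a minimal illustration, not the authors' implementation: it assumes each reported value is represented by a normal pdf, uses the closed-form Bhattacharyya coefficient between two normal densities to build the similarity matrix, takes the eigenvector of the largest eigenvalue as weights for the square-root pdfs that form the centroid, and integrates the renormalized squared centroid on a grid to obtain the mean and variance. All function names, the grid resolution, and the example data are illustrative assumptions.

    import numpy as np

    def bhattacharyya_normal(m1, s1, m2, s2):
        # Closed-form Bhattacharyya coefficient (overlap) of N(m1, s1^2) and N(m2, s2^2).
        v = s1**2 + s2**2
        return np.sqrt(2.0 * s1 * s2 / v) * np.exp(-(m1 - m2)**2 / (4.0 * v))

    def centroid_estimate(means, sigmas, grid_points=4001):
        # Illustrative centroid estimator: datum i is modeled as the pdf N(means[i], sigmas[i]^2).
        means = np.asarray(means, dtype=float)
        sigmas = np.asarray(sigmas, dtype=float)
        n = len(means)

        # Similarity matrix of pairwise overlaps; diagonal elements equal 1.
        S = np.array([[bhattacharyya_normal(means[i], sigmas[i], means[j], sigmas[j])
                       for j in range(n)] for i in range(n)])

        # eigh returns eigenvalues in ascending order; keep the largest one.
        eigvals, eigvecs = np.linalg.eigh(S)
        lam, w = eigvals[-1], eigvecs[:, -1]
        if w.sum() < 0:
            w = -w  # the overall sign of an eigenvector is arbitrary; make the weights positive

        # Centroid "wavefunction": weighted sum of square-root pdfs evaluated on a grid.
        x = np.linspace((means - 6 * sigmas).min(), (means + 6 * sigmas).max(), grid_points)
        sqrt_pdfs = np.exp(-(x - means[:, None])**2 / (4.0 * sigmas[:, None]**2)) \
                    / (2.0 * np.pi * sigmas[:, None]**2)**0.25
        psi = w @ sqrt_pdfs

        # The centroid pdf is psi^2; renormalize it, then take first and second moments.
        dx = x[1] - x[0]
        dens = psi**2
        dens /= dens.sum() * dx
        mean = (x * dens).sum() * dx
        var = ((x - mean)**2 * dens).sum() * dx
        return mean, var, lam

    # Hypothetical example: three coherent results and one outlier. The outlier's pdf
    # barely overlaps the others, so it receives almost no weight in the centroid, and
    # the largest eigenvalue stays near 3 (the coherent cluster) rather than near the
    # matrix dimension 4 -- the coherence diagnostic mentioned in the abstract.
    mean, var, lam = centroid_estimate([10.1, 9.9, 10.0, 14.0], [0.2, 0.2, 0.2, 0.2])
    print(mean, var, lam)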