    Issues of scale for environmental indicators

    The value of environmental indicators largely depends upon the spatial and temporal scale that they represent. Environmental indicators depend upon data availability and also upon the scale at which statements are required. As these may not match, changes in scale may be necessary. In this paper a geostatistical approach to analysing quantitative environmental indicators is used. Scales are defined in terms of resolution, and procedures are presented to translate data from one scale to another: upscaling to change from high-resolution data towards a low resolution, and downscaling for the inverse process. The study is illustrated with three environmental indicators. The first concerns heavy metals in the environment, where the zinc content is used as the indicator. Initially, data were available at a 1 km² resolution and were downscaled to 1 m² resolution. High-resolution data collected later showed a reasonable correspondence with the downscaled data. Available covariates were also used. The second example is from Rothamsted’s long-term experiments. Changes in scale are illustrated by simulating reduced data sets from the full data set on grass cuts. A simple regression model related the yield from these cuts to that of the first cut in the cropping season. Reducing data availability (upscaling) resulted in poor estimates of the regression coefficients. The final example concerns nitrate surpluses on Danish farms. Data at the field level are upscaled to the farm level, and the dispersion variance indicates differences between farms. Geostatistical methods proved useful to define, change and determine the most appropriate scales for environmental variables in space and in time.
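The upscaling described in this abstract amounts to block aggregation of a fine-resolution grid, and the dispersion variance measures the within-block variability that aggregation discards. A minimal numpy sketch on a synthetic grid (the grid, resolutions and aggregation factor are illustrative assumptions, not the paper's data or code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical fine-resolution grid of a soil indicator (e.g. zinc content)
fine = rng.gamma(shape=2.0, scale=50.0, size=(100, 100))

def upscale(grid, factor):
    """Aggregate a fine grid to a coarser resolution by block averaging."""
    h, w = grid.shape
    blocks = grid.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

coarse = upscale(fine, 10)  # 100x100 cells -> 10x10 blocks

# Dispersion variance: by the law of total variance (equal block sizes),
# total variance = between-block variance + mean within-block variance,
# so the within-block (dispersion) part is the difference.
dispersion_var = fine.var() - coarse.var()
```

Block averaging preserves the overall mean, so upscaling loses variability (the dispersion variance) but not the level of the indicator; downscaling has to reintroduce that variability, which is why covariates help.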

    High-Dimensional Bayesian Geostatistics

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations whose complexity increases in cubic order with the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined, highly scalable spatiotemporal stochastic processes. Both processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.
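The NNGP rests on a Vecchia-style factorization of the joint density: each observation is conditioned on at most m previously ordered nearest neighbours, so the likelihood costs roughly linear-in-n work instead of the cubic cost of a full Gaussian process. A hedged sketch, assuming an exponential covariance and a mean-zero process (the function names and defaults are illustrative, not the article's code; the brute-force distance matrix is for clarity, not the sparse implementation):

```python
import numpy as np

def exp_cov(D, phi=2.0, sigma2=1.0):
    """Exponential covariance applied elementwise to distances (an assumed model)."""
    return sigma2 * np.exp(-D / phi)

def vecchia_loglik(y, coords, m=5, phi=2.0, sigma2=1.0):
    """Vecchia / nearest-neighbour GP log-likelihood sketch.

    Observation i is conditioned on its m nearest previously ordered
    neighbours, so each step solves only an m x m system: O(n * m^3)
    overall, rather than O(n^3) for the full GP.  (The pairwise distance
    matrix below is O(n^2) and kept only for readability.)
    """
    n = len(y)
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ll = 0.0
    for i in range(n):
        if i == 0:
            var = sigma2  # marginal variance of the first observation
            ll += -0.5 * (np.log(2 * np.pi * var) + y[0] ** 2 / var)
            continue
        # conditioning set: m nearest among the previously ordered points
        nb = np.argsort(D[i, :i])[:m]
        C_nn = exp_cov(D[np.ix_(nb, nb)], phi, sigma2)
        c_in = exp_cov(D[i, nb], phi, sigma2)
        w = np.linalg.solve(C_nn, c_in)
        mu = w @ y[nb]                  # conditional mean
        var = sigma2 - w @ c_in         # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll
```

With m = n - 1 the factorization is exact (it reproduces the full joint density); the scalability comes from keeping m small and the resulting precision matrix sparse.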

    A disposition of interpolation techniques

    A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists choose an appropriate technique, a disposition is made based on 1) applicability in space, time and space-time, 2) quantification of the accuracy of interpolated values, 3) incorporation of ancillary information, and 4) incorporation of process knowledge. The described methods include inverse distance weighting, nearest neighbour methods, geostatistical interpolation methods, Kalman filter methods, Bayesian Maximum Entropy methods, etc. The applicability of the methods to aggregation (upscaling) and disaggregation (downscaling) is discussed. Software for interpolation is described. The application of interpolation techniques is illustrated in two case studies: temporal interpolation of indicators for ecological water quality, and spatio-temporal interpolation and aggregation of pesticide concentrations in Dutch surface waters. A valuable next step will be to construct a decision tree or decision support system that guides environmental scientists to easy-to-use software implementations appropriate for their interpolation problem. Validation studies are needed to assess the quality of interpolated values and the quality of the information on uncertainty provided by the interpolation method.
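Of the methods listed, inverse distance weighting is the simplest to sketch: each prediction is a weighted average of the observed values, with weights proportional to an inverse power of the distance to each observation. A minimal sketch (the function name, `power` parameter and `eps` guard are illustrative assumptions):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting interpolation sketch.

    Predictions are convex combinations of the known values, so they
    always lie within the range of the data; eps guards against division
    by zero when a query point coincides with an observation.
    """
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** power
    w /= w.sum(axis=1, keepdims=True)   # normalise weights per query point
    return w @ z_known
```

Because the weights are non-negative and sum to one, IDW can never extrapolate beyond the observed minimum and maximum, which is one reason the disposition above also considers geostatistical methods that quantify prediction uncertainty.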