
    Water table mapping using Bayesian data fusion with auxiliary data

    Water table elevations are usually sampled in space using piezometric measurements, which are unfortunately expensive to drill and monitor and are consequently scarce over space. Most of the time, piezometric data are sparsely distributed over large areas, thus providing limited direct information about the level of the corresponding water table. As a consequence, there is a real need for approaches that are able at the same time to (i) provide spatial predictions at unsampled locations and (ii) enable the user to account in a meaningful way for all potentially available secondary information sources that are in some way related to water table elevations. The advantages of these auxiliary information sources are their lower cost and their better spatial coverage, allowing the user to improve the quality of subsequent mapping provided that a meaningful way of merging these data is available. In this paper, a recently developed Bayesian Data Fusion (BDF) technique is applied to the problem of water table spatial mapping. After a brief presentation of the underlying theory, specific assumptions are made and discussed in order to account for a digital elevation model as well as for the geometry of a corresponding river network. Based on a data set for the Dyle basin in the north part of Belgium, the suggested model is then implemented by accounting for two secondary information sources: a spatially exhaustive high-resolution digital elevation model and a metric that accounts for the whole geometry of the river network. Results are compared to those of standard spatial mapping techniques such as ordinary kriging and cokriging. The respective accuracies and precisions of these estimators are finally evaluated using a leave-one-out cross-validation procedure. 
They show on the one hand the obvious benefit of incorporating additional information sources, but more interestingly they also emphasize the limitations of traditional multivariate methods (e.g., cokriging) that fail to benefit efficiently from this additional information due to restrictive modeling hypotheses, whereas BDF is not subject to these restrictions. Though the BDF methodology was illustrated here for the integration of only two secondary information sources, the method can also be applied to incorporate an arbitrary number of auxiliary variables. It has also been successfully applied in other fields such as remote sensing and air pollution, thus opening new avenues for the important and general topic of data integration in a spatial mapping context. Extension towards a space-time context for dynamic mapping is also possible.
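The leave-one-out cross-validation procedure mentioned above can be sketched as follows; the inverse-distance interpolator and the toy sample values are illustrative stand-ins, not the kriging/BDF estimators or the Dyle basin data used in the paper.

```python
import math

def idw_predict(train, x0, power=2.0):
    """Inverse-distance-weighted prediction (a simple stand-in for kriging)."""
    num = den = 0.0
    for x, y, z in train:
        d = math.hypot(x0[0] - x, x0[1] - y)
        if d == 0.0:
            return z
        w = d ** -power
        num += w * z
        den += w
    return num / den

def loo_rmse(samples):
    """Leave-one-out cross-validation RMSE of the interpolator: each sample
    is predicted from all the others, and the errors are aggregated."""
    errs = []
    for i, (x, y, z) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        errs.append(idw_predict(train, (x, y)) - z)
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Toy piezometric samples: (x_km, y_km, head_m)
pts = [(0, 0, 50.0), (1, 0, 51.0), (0, 1, 49.5), (1, 1, 50.5), (2, 1, 52.0)]
rmse = loo_rmse(pts)
```

The same loop can be run for each competing estimator to compare their cross-validated accuracies on equal footing.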

    Bayesian data fusion in a spatial prediction context: a general formulation

    In spite of the exponential growth in the amount of data that one may expect to provide greater modeling and prediction opportunities, the number and diversity of sources over which this information is fragmented is growing at an even faster rate. As a consequence, there is a real need for methods that aim at reconciling them inside an epistemically sound theoretical framework. In a statistical spatial prediction framework, classical methods are based on a multivariate approach to the problem, at the price of strong modeling hypotheses. Though new avenues have recently been opened by focusing on the integration of uncertain data sources, to the best of our knowledge there have been no systematic attempts to explicitly account for information redundancy through a data fusion procedure. Starting from the simple concept of measurement errors, this paper proposes an approach for integrating multiple information sources as part of the prediction process itself through a Bayesian approach. A general formulation is first proposed for deriving the prediction distribution of a continuous variable of interest at unsampled locations using more or less uncertain (soft) information at neighboring locations. The case of multiple information sources is then considered, with a Bayesian solution to the problem of fusing multiple information sources that are provided as separate conditional probability distributions. Well-known methods and results are derived as limit cases. The convenient hypothesis of conditional independence is discussed in the light of information theory and the maximum entropy principle, and a methodology is suggested for the optimal selection of the most informative subset of information, if needed. Based on a synthetic case study, an application of the methodology is presented and discussed.
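Under the conditional independence hypothesis discussed above, fusing several Gaussian conditional distributions reduces to precision-weighted averaging. The sketch below illustrates this limit case with made-up numbers; it is not the paper's general formulation, which covers arbitrary conditional distributions.

```python
def fuse_gaussians(estimates):
    """Fuse independent Gaussian estimates given as (mean, variance) pairs.

    Under conditional independence, the product of Gaussian densities is
    again Gaussian: the fused precision is the sum of the individual
    precisions, and the fused mean is the precision-weighted mean.
    """
    precision = sum(1.0 / var for _, var in estimates)
    mean = sum(mu / var for mu, var in estimates) / precision
    return mean, 1.0 / precision

# Three hypothetical soft data sources about the same unsampled location.
mean, var = fuse_gaussians([(10.0, 1.0), (12.0, 4.0), (9.0, 2.0)])
```

Note that the fused variance is always smaller than that of any single source, which is exactly where ignoring redundancy between sources can lead to overconfident predictions.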

    Bayesian data fusion for adaptable image pansharpening

    Currently, most optical Earth observation satellites carry both a panchromatic sensor and a set of lower spatial resolution multispectral sensors. In order to benefit from both sources of information, several pansharpening methods have been developed to produce a multispectral image at the spatial resolution of the panchromatic band. The aim of this paper is to suggest a novel approach to the pansharpening problem within a Bayesian framework. This Bayesian data fusion (BDF) method relies on statistical relationships between the various spectral bands and the panchromatic band without suffering from restrictive modeling hypotheses. Furthermore, it allows the user to weight the spectral and panchromatic information with respect to either visual or quantitative criteria, which leads to adaptable results according to users' needs and study areas. The performance of this approach was compared to existing methods based on markedly different subset images from very high spatial resolution IKONOS images. Results showed that BDF yielded the highest spectral consistency. Furthermore, small details were adequately added to the pansharpened images with fewer artifacts than those created using wavelet-based methods. Finally, the method was fast and easy to implement owing to its straightforward formulation. As it does not have any intrinsic limitations on the type of data to be processed or the number of bands to be merged, it also appears to be very promising for optical/SAR or hyperspectral image fusion.
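A minimal sketch of the detail-injection idea behind pansharpening is shown below, with a user-chosen weight playing the role of the spectral-versus-spatial trade-off; this weighting scheme is purely illustrative and is not the Bayesian estimator developed in the paper.

```python
def pansharpen_band(ms_up, pan, pan_low, w=0.7):
    """Sharpen one upsampled multispectral band by injecting the
    high-frequency part of the panchromatic band (pan minus its
    low-pass version), scaled by a user weight w in [0, 1]."""
    return [m + w * (p - pl) for m, p, pl in zip(ms_up, pan, pan_low)]

# Toy 1-D "pixels": w = 0 keeps the spectral values unchanged,
# w = 1 injects the full panchromatic detail.
ms = [100.0, 102.0, 101.0]
pan = [110.0, 130.0, 115.0]
pan_low = [112.0, 118.0, 118.0]
sharp = pansharpen_band(ms, pan, pan_low, w=0.5)
```

Low w favors spectral consistency; high w favors spatial detail, mirroring the adaptability the abstract describes.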

    Bayesian data fusion for space-time prediction of air pollutants: The case of NO2 in Belgium

    At the beginning of the 21st century, it is obvious that health and environmental matters are among the most important political and societal topics. The various kinds of pollution (e.g. air pollution, soil pollution, water contamination) are responsible for significant health and environmental degradation. In order to deal adequately with pollution issues, it is important to better understand the acting processes and to be able to account for specific knowledge about the pollutant. This makes it possible to forecast pollutant concentrations so that efficient actions can be taken rapidly. Based on a Bayesian data fusion (BDF) framework, the present paper proposes a methodology for air pollutant forecasting using the space-time properties of the process and several secondary information sources that contribute to a better understanding of the pollutant behavior (e.g. meteorological variables and anthropogenic activities). Consequently, the present work can contribute to improving the representation and the forecast of pollutant fields. Moreover, the developed approach also makes it possible to predict the probability of exceeding a given threshold, as required in official regulations for some pollutants (e.g. the European directives). The BDF framework is applied here to the case of space-time predictions of air concentrations of nitrogen dioxide (NO2) in Belgium. After a detailed description of some specific assumptions, results showed that BDF is able to successfully account for secondary information sources, thus leading to meaningful NO2 predictions.
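The exceedance probability mentioned above follows directly once the prediction at a site and time is summarized by a Gaussian distribution; a minimal sketch (with hypothetical mean, standard deviation, and threshold values) might look like:

```python
import math

def prob_exceed(mu, sigma, threshold):
    """P(Z > threshold) when the prediction Z ~ N(mu, sigma^2)."""
    z = (threshold - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical NO2 prediction at one site and hour: mean 38 ug/m3,
# standard deviation 8 ug/m3, and an illustrative threshold of 40 ug/m3.
p = prob_exceed(38.0, 8.0, 40.0)
```

Mapping this probability over space and time directly supports the regulatory exceedance reporting the abstract refers to.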

    Thematic accuracy assessment of geographic object-based image classification

    Geographic object-based image analysis is an image-processing method in which groups of spatially adjacent pixels are classified as if they behaved as a whole unit. This approach raises concerns about the way subsequent validation studies must be conducted. Indeed, classical point-based sampling strategies based on the spatial distribution of sample points (using systematic, probabilistic or stratified probabilistic sampling) do not rely on the same concept of objects and may prove to be less appropriate than methods explicitly built on the concept of objects used for the classification step. In this study, an original object-based sampling strategy is compared with other approaches used in the literature for the thematic accuracy assessment of object-based classifications. The new sampling scheme and sample analysis are founded on a sound theoretical framework based on a few working hypotheses. The performance of the sampling strategies is quantified using simulated object-based classification results from Quickbird imagery. The bias and the variance of the overall accuracy estimates were used as indicators of each method's benefits. The main advantage of the object-based predictor of the overall accuracy is its performance: for a given confidence interval, it requires fewer sampling units than the other methods. In many cases, this can help to noticeably reduce the sampling effort. Beyond efficiency, more conceptual differences between point-based and object-based sampling are discussed. First, geolocation errors do not influence object-based thematic accuracy as they do point-based accuracy. These errors need to be addressed independently to provide the geolocation precision. Second, the response design is more complex in object-based accuracy assessment. This is interesting for complex classes but might be an issue in the case of large segmentation errors. 
Finally, there is a greater likelihood of reaching the minimum sample size for each class with object-based sampling than with point-based sampling. Further work is necessary to reach the same suitability as point-based sampling for pixel-based classification, but this pioneering study shows that object-based sampling can be implemented within a statistically sound framework.
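The claim that a given confidence interval translates into a required number of sampling units can be made concrete with the standard normal-approximation sample-size formula; the accuracy and half-width values below are hypothetical, and the formula is a textbook approximation rather than the estimator developed in the paper.

```python
import math

def sample_size(p_hat, half_width, z=1.96):
    """Minimum number of sampling units so that the normal-approximation
    confidence interval p_hat +/- half_width for overall accuracy holds
    at the confidence level implied by z (z = 1.96 for 95%)."""
    return math.ceil(z * z * p_hat * (1.0 - p_hat) / half_width ** 2)

# e.g. expected overall accuracy of 85% with a desired half-width of +/-5%:
n = sample_size(0.85, 0.05)
```

An estimator with lower variance per sampling unit, such as the object-based predictor described above, reaches the same half-width with fewer units.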