14 research outputs found

    Valuing map validation: the need for rigorous land cover map accuracy assessment in economic valuations of ecosystem services

    Valuations of ecosystem services often use data on land cover class areal extent. Area estimates from land cover maps may be biased by misclassification error, resulting in flawed assessments and inaccurate valuations. Adjustment for misclassification error is possible for maps subjected to a rigorous validation programme that includes an accuracy assessment. Unfortunately, validation is rare and/or poorly undertaken, as it is often not regarded as a high priority. The benefit of map validation, and hence its value, is indicated with two maps. The International Geosphere-Biosphere Programme's DISCover map was used to estimate wetland value globally. The latter changed from US$1.92 trillion yr-1 to US$2.79 trillion yr-1 when adjusted for misclassification bias. For the conterminous USA, ecosystem services value based on six land cover classes from the National Land Cover Database (2006) changed from US$1118 billion yr-1 to US$600 billion yr-1 after adjustment for misclassification bias. The effect of error-adjustment on the valuations indicates the value of map validation to rigorous evidence-based science and policy work in relation to aspects of natural capital. The benefit arising from validation was orders of magnitude larger than mapping costs, and it is argued that validation should be a high priority in mapping programs and should inform valuations.
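The misclassification-bias adjustment described above can be illustrated with the standard stratified area estimator built from an accuracy-assessment confusion matrix. This is a minimal sketch: the function name, the two-class confusion matrix, and the mapped areas are hypothetical, not figures from the study.

```python
import numpy as np

def error_adjusted_areas(confusion, mapped_areas):
    """Adjust mapped class areas for misclassification bias using a
    confusion matrix from an accuracy assessment (stratified estimator).

    confusion[i, j]  -- validation samples mapped as class i whose
                        reference (true) class is j
    mapped_areas[i]  -- area mapped as class i (any area unit)
    """
    confusion = np.asarray(confusion, dtype=float)
    mapped_areas = np.asarray(mapped_areas, dtype=float)
    total_area = mapped_areas.sum()
    W = mapped_areas / total_area                  # mapped proportion of each class
    row_tot = confusion.sum(axis=1, keepdims=True)
    p = W[:, None] * confusion / row_tot           # estimated cell proportions p_ij
    return total_area * p.sum(axis=0)              # reference-class area estimates

# Hypothetical two-class example: "wetland" vs "other"
conf = [[45, 5],    # 50 samples mapped wetland: 45 truly wetland, 5 other
        [10, 40]]   # 50 samples mapped other:   10 truly wetland, 40 other
adjusted = error_adjusted_areas(conf, mapped_areas=[200.0, 800.0])
```

Note that the adjusted areas redistribute the total mapped area across classes (here the wetland estimate grows because more "other" pixels are truly wetland than vice versa), which is exactly the mechanism behind the valuation shifts reported above.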

    Using the sub-pixel attraction model to enhance the spatial resolution of the digital elevation model (DEM)

    Increasing spatial resolution, and thereby the information content of a digital elevation model (DEM), is among the most important topics in quantitative geomorphology. Various models have been proposed for increasing spatial resolution; among them, the attraction model, as the most recent, offers very high accuracy. This model was first used to increase the spatial resolution of satellite images. In this research, the attraction model was used for the first time to increase the spatial resolution of a DEM. Two neighbourhood models, touching and quadrant, were used to estimate sub-pixel values. Unlike machine learning algorithms, the attraction model requires no calibration or training, which reduces the computation time needed to run the algorithm. After producing output images for the sub-pixels at scales of 2, 3 and 4 with different neighbourhoods, the best scale and the most suitable neighbourhood type were determined using ground control points, and RMSE values were calculated for each. A total of 2118 ground control points, extracted from surveying operations, were used. The RMSE was calculated separately for each DEM. The results showed that the attraction model improved the accuracy of the output images and increased their spatial resolution. Among the scales and neighbourhoods tested, scale 3 with the quadrant neighbourhood model was the most accurate, with the lowest RMSE: 5.54 for the 30 m DEM and 9.13 for the 90 m DEM.
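The attraction model can be sketched for a single coarse pixel: each sub-pixel's attraction to a class is the inverse-distance-weighted mean of that class's fractions in the neighbouring coarse pixels. The function below is a minimal illustration assuming a touching (8-connected) neighbourhood; the function name and the toy fraction image are assumptions, not code from the paper.

```python
import numpy as np

def attraction_subpixels(fractions, row, col, scale=3):
    """For the coarse pixel at (row, col), compute each sub-pixel's
    attraction to the class whose coarse fraction image is `fractions`
    (values in [0, 1]), using the touching (8-connected) neighbours."""
    H, W = fractions.shape
    attr = np.zeros((scale, scale))
    for a in range(scale):
        for b in range(scale):
            # sub-pixel centre in coarse-pixel coordinates
            sy = row + (a + 0.5) / scale
            sx = col + (b + 0.5) / scale
            num = den = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = row + dy, col + dx
                    if (dy, dx) == (0, 0) or not (0 <= ny < H and 0 <= nx < W):
                        continue
                    d = np.hypot(sy - (ny + 0.5), sx - (nx + 0.5))
                    num += fractions[ny, nx] / d   # closer neighbours weigh more
                    den += 1.0 / d
            attr[a, b] = num / den if den else 0.0
    return attr

# Toy fraction image: the class fully occupies the left column of coarse pixels
fractions = np.zeros((3, 3))
fractions[:, 0] = 1.0
attr = attraction_subpixels(fractions, 1, 1, scale=3)
```

Sub-pixels nearer the high-fraction neighbours receive higher attraction, so in a full implementation the per-pixel class quota would be allocated to the most attracted sub-pixels.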

    Enhancing the spatial resolution of satellite-derived land surface temperature mapping for urban areas

    Land surface temperature (LST) is an important environmental variable for urban studies such as those focused on the urban heat island (UHI). Though satellite-derived LST could be a useful complement to traditional LST data sources, the spatial resolution of thermal sensors limits the utility of remotely sensed thermal data. Here, a thermal sharpening technique is proposed which can enhance the spatial resolution of satellite-derived LST based on super-resolution mapping (SRM) and super-resolution reconstruction (SRR). This method overcomes the limitation of traditional thermal image sharpeners, which require fine spatial resolution images for resolution enhancement. Furthermore, environmental studies such as UHI modelling typically use statistical methods that require the input variables to be independent, meaning the input LST and other indices should be uncorrelated. The proposed Super-Resolution Thermal Sharpener (SRTS) does not rely on any surface index, keeping the derived LST as independent as possible from the other variables that UHI modelling often requires. To validate the SRTS, its performance is compared against that of four popular thermal sharpeners: the thermal sharpening algorithm (TsHARP), the adjusted stratified stepwise regression method (Stepwise), pixel block intensity modulation (PBIM), and emissivity modulation (EM). The benefit of combining SRR and SRM was also verified by comparing the accuracy of SRTS against sharpening based on SRM or SRR alone. The results show that SRTS can enhance the spatial resolution of LST with accuracy equal or even superior to that of the other thermal sharpeners, without requiring fine spatial resolution input. This shows the potential of SRTS for application where only limited meteorological data sources are available yet fine spatial resolution LST is desirable.
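For contrast, the index-based sharpening that SRTS is designed to avoid can be sketched along the lines of TsHARP: fit a coarse-scale LST-NDVI regression, apply it at fine resolution, and restore the coarse residual so coarse-pixel means are preserved. This is a minimal sketch of the baseline idea, not the SRTS method itself; the function name and the synthetic data are assumptions.

```python
import numpy as np

def tsharp(lst_coarse, ndvi_coarse, ndvi_fine, scale):
    """Minimal TsHARP-style sharpening: fit a linear LST~NDVI relation at
    coarse resolution, apply it to fine-resolution NDVI, and add back the
    coarse-scale residual so each coarse-pixel mean is preserved."""
    a, b = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    residual = lst_coarse - (a * ndvi_coarse + b)          # model error per coarse pixel
    residual_fine = np.kron(residual, np.ones((scale, scale)))
    return a * ndvi_fine + b + residual_fine

# Synthetic example: LST is exactly linear in NDVI, so the residual is zero
ndvi_fine = np.array([[0.1, 0.2, 0.3, 0.4],
                      [0.1, 0.2, 0.3, 0.4],
                      [0.5, 0.6, 0.7, 0.8],
                      [0.5, 0.6, 0.7, 0.8]])
ndvi_coarse = ndvi_fine.reshape(2, 2, 2, 2).mean(axis=(1, 3))  # 2x2 block means
lst_coarse = 300.0 - 10.0 * ndvi_coarse
lst_fine = tsharp(lst_coarse, ndvi_coarse, ndvi_fine, scale=2)
```

The dependence of this baseline on a fine-resolution NDVI input, and the correlation it induces between the sharpened LST and that index, is precisely what the index-free SRTS approach avoids.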

    Super-resolution generative adversarial network based on the dual dimension attention mechanism for biometric image super-resolution

    There exist many types of intelligent security sensors in the environment of the Internet of Things (IoT) and cloud computing. Among them, sensors for biometrics are one of the most important types. Biometric sensors capture the physiological or behavioral features of a person, which can be further processed with cloud computing to verify or identify the user. However, a low-resolution (LR) biometric image loses feature details and greatly reduces the recognition rate. Moreover, the lack of resolution negatively affects the performance of image-based biometric technology. From a practical perspective, most IoT devices suffer from hardware constraints, and low-cost equipment may not be able to meet various requirements, particularly for image resolution, because high-resolution (HR) images demand additional storage and high transmission bandwidth. Therefore, how to achieve high accuracy for a biometric system without using expensive, high-cost image sensors is an interesting and valuable issue in the field of intelligent security sensors. In this paper, we propose DDA-SRGAN, a generative adversarial network (GAN)-based super-resolution (SR) framework using a dual-dimension attention mechanism. The proposed model can be trained to discover regions of interest (ROI) automatically in LR images without any given prior knowledge. The experiments were performed on the CASIA-Thousand-v4 and CelebA datasets. The experimental results show that the proposed method is able to learn the details of features in crucial regions and achieves better performance in most cases.

    An iterative interpolation deconvolution algorithm for superresolution land cover mapping

    Super-resolution mapping (SRM) is a method to produce a fine spatial resolution land cover map from coarse spatial resolution remotely sensed imagery. A popular approach for SRM is a two-step algorithm, which first increases the spatial resolution of coarse fraction images by interpolation, and then determines the class labels of fine resolution pixels using the maximum a posteriori (MAP) principle. By constructing a new image formation process that establishes the relationship between observed coarse resolution fraction images and the latent fine resolution land cover map, it is found that the MAP principle only matches area-to-point interpolation algorithms, and should be replaced by deconvolution if an area-to-area interpolation algorithm is applied. A novel iterative interpolation-deconvolution (IID) SRM algorithm is proposed. The IID algorithm first interpolates the coarse resolution fraction images with an area-to-area interpolation algorithm and produces an initial fine resolution land cover map by deconvolution. The fine spatial resolution land cover map is then updated by re-convolution, back-projection and deconvolution iteratively until the final result is produced. The IID algorithm was evaluated with simulated shapes, simulated multi-spectral images, and degraded Landsat images, including comparison against three widely used SRM algorithms: pixel swapping, bilinear interpolation, and Hopfield neural network. Results show that the IID algorithm can reduce the impact of fraction errors while simultaneously preserving patch continuity and patch boundary smoothness. Moreover, the IID algorithm produced fine resolution land cover maps with higher accuracies than those produced by the other SRM algorithms.
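The re-convolution/back-projection loop at the core of such iterative SRM schemes can be sketched by assuming the coarse image formation is a simple block mean: refine a fine-resolution fraction image until its block means reproduce the observed coarse fractions. This is an illustrative simplification, not the IID algorithm itself; the function names and parameters are hypothetical.

```python
import numpy as np

def block_mean(img, s):
    """Assumed image formation model: average each s x s block
    (convolution with a box kernel followed by downsampling)."""
    H, W = img.shape
    return img.reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def iterative_backprojection(coarse, scale, n_iter=50, step=1.0):
    """Sketch of the re-convolution / back-projection loop: start from an
    interpolated fine image, then repeatedly push the coarse-space residual
    back to fine resolution until block means match the observations."""
    fine = np.kron(coarse, np.ones((scale, scale)))        # initial interpolation
    for _ in range(n_iter):
        err = coarse - block_mean(fine, scale)             # residual in coarse space
        fine += step * np.kron(err, np.ones((scale, scale)))  # back-projection
    return np.clip(fine, 0.0, 1.0)

coarse = np.array([[0.25, 1.0],
                   [0.0,  0.5]])
fine = iterative_backprojection(coarse, scale=2)
```

The loop enforces the coherence constraint that any SRM result should honour: aggregating the fine map back to the coarse grid must reproduce the input fractions.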

    Sub-pixel mapping with point constraints

    Remote sensing images contain abundant land cover information. Due to the complex nature of land cover, however, mixed pixels exist widely in remote sensing images. Sub-pixel mapping (SPM) is a technique for predicting the spatial distribution of land cover classes within mixed pixels. As an ill-posed inverse problem, the uncertainty of the prediction cannot be eliminated and hinders the production of accurate sub-pixel maps. In contrast to conventional methods that use continuous geospatial information (e.g., images) to enhance SPM, in this paper an SPM method that fuses point constraints into SPM is proposed. The method is implemented based on the pixel swapping algorithm (PSA) and utilizes auxiliary point information to reduce the uncertainty in the SPM process and increase map accuracy. The point data are incorporated into both the initialization and optimization processes of PSA. Experiments were performed on three images to validate the proposed method. The influence of the number of point data, the spatial character of the land cover, and the zoom factor on performance was also investigated. The results show that by using the point data, the proposed SPM method can separate more small-sized targets from aggregated artifacts, and the accuracies increase markedly. The proposed method is also more accurate than the advanced radial basis function interpolation-based method. The advantage of using point data is more evident when the point data size and scale factor are large and the spatial autocorrelation of the land cover is small. As the amount of point data increases, however, the gain in accuracy becomes less noticeable. Furthermore, the SPM accuracy can still be increased even if the point data and coarse proportions contain errors. © 2020 Elsevier Inc.
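A minimal binary version of the pixel swapping algorithm (PSA) that the method builds on might look as follows. The neighbourhood weighting, iteration cap, and function names are illustrative assumptions, and the paper's point-constraint fusion is deliberately not included.

```python
import numpy as np

def attractiveness(grid, y, x, radius=1):
    """Distance-weighted count of class-1 neighbours of sub-pixel (y, x)."""
    H, W = grid.shape
    a = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                a += grid[ny, nx] / np.hypot(dy, dx)
    return a

def pixel_swap(grid, coarse_shape, scale, n_iter=20):
    """Binary pixel swapping: within each coarse pixel, swap the least
    attractive class-1 sub-pixel with the most attractive class-0 one
    whenever that increases clustering; per-coarse-pixel class
    proportions are preserved by construction."""
    grid = grid.copy()
    for _ in range(n_iter):
        swapped = False
        for cy in range(coarse_shape[0]):
            for cx in range(coarse_shape[1]):
                cells = [(cy * scale + a, cx * scale + b)
                         for a in range(scale) for b in range(scale)]
                ones = [(attractiveness(grid, y, x), y, x)
                        for y, x in cells if grid[y, x] == 1]
                zeros = [(attractiveness(grid, y, x), y, x)
                         for y, x in cells if grid[y, x] == 0]
                if not ones or not zeros:
                    continue
                lo, hi = min(ones), max(zeros)
                if hi[0] > lo[0]:                    # swap improves clustering
                    grid[lo[1], lo[2]], grid[hi[1], hi[2]] = 0, 1
                    swapped = True
        if not swapped:
            break
    return grid

grid = np.array([[1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [1, 0, 0, 1],
                 [0, 1, 1, 0]])
out = pixel_swap(grid, coarse_shape=(2, 2), scale=2)
```

In the paper's extension, point observations would fix known sub-pixel labels during both initialization and these swap iterations, shrinking the solution space of this ill-posed problem.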

    Spatial-temporal super-resolution land cover mapping with a local spatial-temporal dependence model

    The mixed pixel problem is common in remote sensing. A soft classification can generate land cover class fraction images that give the areal proportions of the various land cover classes within pixels. The spatial distribution of land cover classes within each mixed pixel is, however, not represented. Super-resolution land cover mapping (SRM) is a technique to predict the spatial distribution of land cover classes within the mixed pixel using fraction images as input. Spatial-temporal SRM (STSRM) extends basic SRM to include a temporal dimension by using a finer spatial resolution land cover map that pre- or postdates the image acquisition time as ancillary data. Traditional STSRM methods often use only one land cover map as the constraint, neglecting the majority of available land cover maps of the same scene acquired at different dates when reconstructing a full trajectory of land cover change from time series data. In addition, these methods define the temporal dependence globally, neglecting the spatial variation of land cover temporal dependence intensity within images. A novel local STSRM (LSTSRM) is proposed in this paper. LSTSRM incorporates more than one available land cover map to constrain the solution, and develops a local temporal dependence model in which the temporal dependence intensity may vary spatially. The results show that LSTSRM can eliminate speckle-like artifacts, reconstruct the spatial patterns of land cover patches in the resulting maps, and increase the overall accuracy compared with other STSRM methods.

    ENHANCING INVERSE MODELING IN HYDROGEOLOGY WITH MODERN MACHINE LEARNING ALGORITHMS

    Inverse estimation of spatially distributed parameter fields plays an important role in many scientific disciplines, including hydrogeology, geophysics, earth science, and environmental engineering. Classic stochastic sampling approaches such as Markov chain Monte Carlo (MCMC) and optimization approaches such as the geostatistical approach (GA) can solve inverse problems with a modest number of unknowns. However, challenges arise for large-scale, highly heterogeneous fields or fields with special characteristics, such as connected preferential paths. In this thesis, firstly, we develop a new data augmentation approach, fast conditional image quilting, to synthesize realizations based on limited measurements; this approach is later used to generate channelized training images to support the inverse modeling study. Secondly, unlike MCMC and optimization approaches that require many forward model evaluations in each iteration, we develop two neural network inverse models, on full dimensions (NNI) and on principal components (NNPCI), to directly learn the inverse relationship between indirect measurements, such as hydraulic heads, and the underlying parameter fields, such as hydraulic conductivity. We successfully apply our neural network models to large-scale hydraulic tomography experiments to estimate spatially distributed hydraulic conductivity. In particular, with the help of principal component analysis (PCA), the number of neurons in the last layer of NNPCI equals the number of retained principal components, which further accelerates the algorithm and makes the system scalable regardless of the number of unknown field parameters. NNI also demonstrates satisfactory inverse results on full dimensions for both Gaussian and non-Gaussian fields with channelized patterns.
    The major computational advantage of NNI and NNPCI is that the training data can be generated by independent forward model simulations, which can be run efficiently in parallel. Finally, to account for errors from different sources, including input errors, model structure errors, and model parameter errors, we incorporate Bayes' theorem into the neural network models for uncertainty analysis. The system behaves more stably and consistently across varying spatial and temporal scales. The developed approaches are successfully validated with synthetic and field cases.
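The NNPCI idea, regressing the PCA scores of the parameter field from indirect measurements so that the output dimension equals the number of retained components, can be sketched with a toy linear forward model. Everything below (field and observation sizes, the random data, a linear least-squares stand-in for the network head) is an illustrative assumption, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 'fields' are unknown parameter fields (e.g. log
# conductivity) and 'G' is a toy linear forward model mapping a field to
# indirect measurements (e.g. hydraulic heads).
n_train, n_cells, n_obs, n_pc = 200, 64, 10, 5
fields = rng.normal(size=(n_train, n_cells))
G = rng.normal(size=(n_cells, n_obs))
heads = fields @ G                              # training "measurements"

# PCA of the training fields: keep n_pc principal components
mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
pcs = Vt[:n_pc]                                 # principal directions (n_pc x n_cells)
scores = (fields - mean) @ pcs.T                # low-dimensional regression targets

# "Network" head reduced to one linear layer fit by least squares, mirroring
# the idea that the output layer has one neuron per retained component
W, *_ = np.linalg.lstsq(heads, scores, rcond=None)

def invert(obs):
    """Estimate a parameter field from measurements via predicted PC scores."""
    return mean + (obs @ W) @ pcs

test_field = rng.normal(size=n_cells)
estimate = invert(test_field @ G)               # inverse estimate, no iterations
```

The key computational point from the abstract survives even in this toy: the training pairs (heads, scores) come from independent forward runs that parallelize trivially, and inversion itself needs no further forward model evaluations.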