39 research outputs found

    Spatiotemporal subpixel mapping of time-series images

    Land cover/land use (LCLU) information extraction from multitemporal sequences of remote sensing imagery is becoming increasingly important. Mixed pixels are a common problem in the Landsat and MODIS images that are widely used for LCLU monitoring. Recently developed subpixel mapping (SPM) techniques can extract LCLU information at the subpixel level by dividing mixed pixels into subpixels to which hard classes are then allocated. However, SPM has rarely been studied for time-series images (TSIs). In this paper, a spatiotemporal SPM approach is proposed for TSIs. In contrast to conventional spatial dependence-based SPM methods, the proposed approach considers spatial and temporal dependences simultaneously, with the former describing the correlation of subpixel classes within each image and the latter describing the correlation of subpixel classes between images in a temporal sequence. The approach was developed assuming that one fine spatial resolution map is available within the TSIs. The SPM of TSIs is formulated as a constrained optimization problem: under the coherence constraint imposed by the coarse LCLU proportions, the objective is to maximize the spatiotemporal dependence, defined by blending the spatial and temporal dependences. Experiments on three data sets showed that the proposed approach can provide more accurate subpixel resolution TSIs than conventional SPM methods. The SPM results obtained from the TSIs provide an excellent opportunity for LCLU dynamic monitoring and change detection at a finer spatial resolution than that of the available coarse spatial resolution TSIs.
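
    As a rough illustration of the objective described above, the sketch below scores a candidate subpixel class map by blending a spatial dependence term (agreement with neighbouring subpixels in the same map) and a temporal dependence term (agreement with the co-located subpixels of another map in the sequence). The function name, the neighbourhood window and the blending weight lam are illustrative assumptions, not the paper's implementation; in practice the score would be maximized by swapping subpixel labels within each coarse pixel so that the class counts implied by the coarse proportions (the coherence constraint) are preserved.

        import numpy as np

        def spatiotemporal_dependence(labels_t, labels_ref, lam=0.5, win=1):
            # Spatial term: mean agreement of each subpixel with its neighbours
            # inside the same candidate map labels_t (2-D array of class labels).
            h, w = labels_t.shape
            spatial = 0.0
            for i in range(h):
                for j in range(w):
                    i0, i1 = max(0, i - win), min(h, i + win + 1)
                    j0, j1 = max(0, j - win), min(w, j + win + 1)
                    neigh = labels_t[i0:i1, j0:j1]
                    spatial += np.mean(neigh == labels_t[i, j])
            spatial /= h * w
            # Temporal term: agreement with the co-located subpixel classes of a
            # reference map labels_ref taken from another date in the sequence.
            temporal = np.mean(labels_t == labels_ref)
            # Blend the two dependences; lam is an illustrative weight.
            return lam * spatial + (1.0 - lam) * temporal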

    Fast and Slow Changes Constrained Spatio-temporal Subpixel Mapping

    Subpixel mapping (SPM) is a technique to tackle the mixed pixel problem and produce land cover and land use (LCLU) maps at a finer spatial resolution than the original coarse data. However, uncertainty exists unavoidably in SPM, which is an ill-posed downscaling problem. Spatio-temporal SPM methods have been proposed to deal with this uncertainty, but current methods fail to fully exploit the information in the time-series images, especially the more rapid changes that occur over a short time interval. In this paper, a fast and slow changes constrained spatio-temporal subpixel mapping (FSSTSPM) method is proposed to account for fast LCLU changes over a short time interval and slow changes over a long time interval. That is, both fast and slow change-based temporal constraints are proposed and incorporated simultaneously into the FSSTSPM to increase the accuracy of SPM. The proposed FSSTSPM method was validated using two synthetic datasets with various proportion errors. It was also applied to oil-spill mapping using a real PlanetScope-Sentinel-2 dataset and to Amazon deforestation mapping using a real Landsat-MODIS dataset. The results demonstrate the superiority of FSSTSPM. Moreover, the advantage of FSSTSPM becomes more obvious as the proportion errors increase. The concepts of fast and slow changes, together with the derived temporal constraints, provide new insight into enhancing SPM by taking fuller advantage of the temporal information in the available time-series images.
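
    A minimal sketch of the idea of pairing the two temporal constraints, assuming that a candidate fine map is scored against one reference map close in time (capturing fast, short-interval change) and one further away in time (capturing slow, long-interval change). The function and the weights below are hypothetical, not the FSSTSPM formulation itself.

        import numpy as np

        def fast_slow_score(labels_t, ref_short, ref_long,
                            spatial_term=0.0,
                            w_spatial=1.0, w_fast=0.5, w_slow=0.5):
            # Fast-change constraint: agreement with a fine map close in time.
            fast = np.mean(labels_t == ref_short)
            # Slow-change constraint: agreement with a fine map far away in time.
            slow = np.mean(labels_t == ref_long)
            # Combine with whatever spatial dependence term the SPM model uses.
            return w_spatial * spatial_term + w_fast * fast + w_slow * slow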

    UrbanFM: Inferring Fine-Grained Urban Flows

    Urban flow monitoring systems play important roles in smart city efforts around the world. However, the ubiquitous deployment of monitoring devices, such as CCTVs, incurs a long-lasting and enormous cost for maintenance and operation. This suggests the need for a technology that can reduce the number of deployed devices while preventing the degradation of data accuracy and granularity. In this paper, we aim to infer the real-time and fine-grained crowd flows throughout a city based on coarse-grained observations. This task is challenging for two reasons: the spatial correlations between coarse- and fine-grained urban flows, and the complexities of external impacts. To tackle these issues, we develop a method entitled UrbanFM based on deep neural networks. Our model consists of two major parts: 1) an inference network that generates fine-grained flow distributions from coarse-grained inputs by using a feature extraction module and a novel distributional upsampling module; 2) a general fusion subnet that further boosts performance by considering the influences of different external factors. Extensive experiments on two real-world datasets, namely TaxiBJ and HappyValley, validate the effectiveness and efficiency of our method compared to seven baselines, demonstrating the state-of-the-art performance of our approach on the fine-grained urban flow inference problem.
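
    One way to read the distributional upsampling module is that the sub-cell weights produced by the network are normalized within each coarse cell and then multiplied by the coarse flow, so the fine-grained flows stay consistent with the coarse observation. The PyTorch sketch below illustrates only that normalization step; the shapes, names and use of exp are assumptions for illustration, not the released UrbanFM code.

        import torch
        import torch.nn.functional as F

        def distributional_upsample(coarse, logits, scale):
            # coarse: (B, 1, H, W) coarse-grained flows.
            # logits: (B, 1, H*scale, W*scale) unnormalized sub-cell weights.
            _, _, hf, wf = logits.shape
            # Normalize the weights inside every scale x scale block so that
            # they sum to one (a per-block softmax).
            blocks = F.unfold(logits.exp(), kernel_size=scale, stride=scale)
            blocks = blocks / blocks.sum(dim=1, keepdim=True)
            weights = F.fold(blocks, output_size=(hf, wf),
                             kernel_size=scale, stride=scale)
            # Spread each coarse flow value over its block and redistribute it
            # according to the normalized weights.
            coarse_up = F.interpolate(coarse, scale_factor=scale, mode='nearest')
            return weights * coarse_up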

    Spatial-temporal super-resolution land cover mapping with a local spatial-temporal dependence model

    The mixed pixel problem is common in remote sensing. A soft classification can generate land cover class fraction images that give the areal proportions of the various land cover classes within pixels. The spatial distribution of land cover classes within each mixed pixel is, however, not represented. Super-resolution land cover mapping (SRM) is a technique to predict the spatial distribution of land cover classes within the mixed pixel using fraction images as input. Spatial-temporal SRM (STSRM) extends the basic SRM to include a temporal dimension by using a finer-spatial resolution land cover map that pre- or post-dates the image acquisition time as ancillary data. Traditional STSRM methods often use one land cover map as the constraint, but when applied to time series data they neglect the other available land cover maps of the same scene acquired at different dates and thus cannot reconstruct a full trajectory of land cover changes. In addition, these methods define the temporal dependence globally and neglect the spatial variation of the intensity of land cover temporal dependence within images. A novel local STSRM (LSTSRM) is proposed in this paper. LSTSRM incorporates more than one available land cover map to constrain the solution and develops a local temporal dependence model in which the temporal dependence intensity may vary spatially. The results show that LSTSRM can eliminate speckle-like artifacts, reconstruct the spatial patterns of land cover patches in the resulting maps, and increase the overall accuracy compared with other STSRM methods.
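
    The local temporal dependence model can be pictured as a spatially varying weight on the temporal term. The toy function below, an assumption for illustration rather than the LSTSRM definition, derives such a weight from how stable the land cover is between two reference fine maps within a local window: the more stable the window, the more the temporal constraint is trusted there.

        import numpy as np

        def local_temporal_weight(ref_map_a, ref_map_b, i, j, win=5):
            # Fraction of unchanged labels between two reference fine maps in a
            # win x win window centred on (i, j); used here as a proxy for the
            # local intensity of temporal dependence.
            h, w = ref_map_a.shape
            r = win // 2
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            return float(np.mean(ref_map_a[i0:i1, j0:j1] == ref_map_b[i0:i1, j0:j1]))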

    Optimizing hopfield neural network for super-resolution mapping

    Remote sensing is a potential source of information on land cover on the Earth's surface. Different types of remote sensing images offer different spatial resolutions. High resolution images contain rich information but are expensive, while low resolution images are less detailed but cheap. The super-resolution mapping (SRM) technique is used to enhance the spatial resolution of a low resolution image in order to produce land cover maps with high accuracy. The mapping technique is crucial for differentiating land cover classes. The Hopfield neural network (HNN) is a popular approach in SRM. Currently, the numerical implementation of the HNN is based on an ordinary differential equation (ODE) solved with the traditional Euler method. Although it produces satisfactory accuracy, the Euler method is slow, especially when dealing with large data such as remote sensing images. Therefore, in this paper several more advanced numerical methods are applied to the ODE formulation in SRM in order to speed up the iterative procedure: improved Euler, Runge-Kutta, and Adams-Moulton. Four land cover classes (vegetation, water bodies, roads, and buildings) are used in this work. The traditional Euler method produces a mapping accuracy of 85.18% computed in 1000 iterations within 220-1020 seconds. The improved Euler method produces an accuracy of 86.63% in 60-620 iterations within 20-500 seconds. The Runge-Kutta method produces an accuracy of 86.63% in 70-600 iterations within 20-400 seconds. The Adams-Moulton method produces an accuracy of 86.64% in 40-320 iterations within 10-150 seconds.
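
    For reference, the update rules being compared can be written as single integration steps of the HNN state equation du/dt = f(u), where f returns the net input to each neuron (assumed here to be the gradients of the goal and constraint energy terms). The steps below are the standard textbook schemes, shown only to make the comparison concrete; they are not the authors' code.

        def euler_step(f, u, dt):
            # Plain Euler update used in the traditional HNN implementation.
            return u + dt * f(u)

        def improved_euler_step(f, u, dt):
            # Improved Euler (Heun): average of the slope at the start and the
            # slope at the Euler-predicted end point.
            k1 = f(u)
            k2 = f(u + dt * k1)
            return u + 0.5 * dt * (k1 + k2)

        def runge_kutta4_step(f, u, dt):
            # Classical fourth-order Runge-Kutta step.
            k1 = f(u)
            k2 = f(u + 0.5 * dt * k1)
            k3 = f(u + 0.5 * dt * k2)
            k4 = f(u + dt * k3)
            return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

        # The Adams-Moulton scheme is implicit and multi-step, so it also needs
        # slopes from previous iterations and a predictor; it is omitted from
        # this sketch.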

    Spatio-temporal sub-pixel land cover mapping of remote sensing imagery using spatial distribution information from same-class pixels

    The generation of land cover maps with both fine spatial and temporal resolution would aid the monitoring of change on the Earth's surface. Spatio-temporal sub-pixel land cover mapping (STSPM) uses a few fine spatial resolution (FR) maps and a time series of coarse spatial resolution (CR) remote sensing images as input to generate FR land cover maps at the temporal frequency of the CR data set. Traditional STSPM selects spatially adjacent FR pixels within a local window as neighborhoods to model the land cover spatial dependence, which can be a source of error and uncertainty in the maps generated by the analysis. This paper proposes a new STSPM that uses FR remote sensing images that pre- and/or post-date the CR image as ancillary data to enhance the quality of the FR map outputs. Spectrally similar pixels within the locality of a target FR pixel in the ancillary data are likely to represent the same land cover class, and hence such same-class pixels can provide spatial information to aid the analysis. Experimental results showed that the proposed STSPM predicted land cover maps more accurately than two comparative state-of-the-art STSPM algorithms.
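
    The selection of same-class neighbours can be sketched as ranking the fine-resolution pixels in a local window of the ancillary image by spectral distance to the target pixel and keeping the closest ones. The function below is an illustrative assumption (window size, distance measure and k are placeholders), not the paper's exact procedure.

        import numpy as np

        def similar_neighbours(fr_image, i, j, win=5, k=10):
            # fr_image: (rows, cols, bands) ancillary fine-resolution image.
            h, w, bands = fr_image.shape
            r = win // 2
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            window = fr_image[i0:i1, j0:j1].reshape(-1, bands)
            # Euclidean spectral distance to the target pixel; keep the k most
            # similar pixels as likely same-class neighbours.
            dist = np.linalg.norm(window - fr_image[i, j], axis=1)
            order = np.argsort(dist)[:k]
            rows, cols = np.unravel_index(order, (i1 - i0, j1 - j0))
            return list(zip(rows + i0, cols + j0))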

    Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps

    Studies of land cover dynamics would benefit greatly from the generation of land cover maps at both fine spatial and temporal resolutions. Fine spatial resolution images are usually acquired relatively infrequently, whereas coarse spatial resolution images may be acquired with a high repetition rate but may not capture the spatial detail of the land cover mosaic of the region of interest. Traditional image spatial–temporal fusion methods focus on the blending of pixel spectral reflectance values and do not directly provide land cover maps or information on land cover dynamics. In this research, a novel Spatial–Temporal remotely sensed Images and land cover Maps Fusion Model (STIMFM) is proposed to produce land cover maps at both fine spatial and temporal resolutions using a series of coarse spatial resolution images together with a few fine spatial resolution land cover maps that pre- and post-date the series of coarse spatial resolution images. STIMFM integrates both the spatial and temporal dependences of fine spatial resolution pixels and outputs a series of fine spatial–temporal resolution land cover maps instead of reflectance images, which can be used directly for studies of land cover dynamics. Here, three experiments based on simulated and real remotely sensed images were undertaken to evaluate STIMFM for studies of land cover change. These experiments included a comparative assessment of methods based on a single-date image, such as super-resolution approaches (e.g., pixel swapping-based super-resolution mapping), and of state-of-the-art spatial–temporal fusion approaches that use the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Flexible Spatiotemporal DAta Fusion model (FSDAF) to predict the fine-resolution images, to which a maximum likelihood classifier and an automated land cover updating approach based on integrated change detection and classification were then applied to generate the fine-resolution land cover maps. Results show that the methods based on a single-date image failed to predict the pixels of changed and unchanged land cover with high accuracy. The land cover maps obtained by classification of the reflectance images output by ESTARFM and FSDAF contained substantial misclassification, and the classification accuracy was lower for pixels of changed land cover than for pixels of unchanged land cover. In addition, STIMFM predicted fine spatial–temporal resolution land cover maps from a series of Landsat images and a few Google Earth images, a case to which ESTARFM and FSDAF, which require correlated reflectance bands in the coarse and fine images, cannot be applied. Notably, STIMFM generated higher accuracy for pixels of both changed and unchanged land cover in comparison with the other methods.
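
    At a very high level, the fusion of a coarse image series with bracketing fine maps can be organized as the loop sketched below: each coarse image is unmixed to class fractions, and a fine map is then allocated that respects those fractions while agreeing spatially within the map and temporally with the bracketing fine maps. The unmix and allocate callables, and the loop itself, are placeholders used to show the data flow, not the STIMFM algorithm.

        def fuse_series(cr_images, fr_map_start, fr_map_end, unmix, allocate):
            # cr_images: iterable of coarse images between the two fine maps.
            # unmix(cr_image) -> per-pixel class fractions (soft classification).
            # allocate(fractions, previous_map, fr_map_end) -> fine map honouring
            # the fractions and the spatial/temporal dependences.
            fr_maps = []
            previous = fr_map_start
            for cr_image in cr_images:
                fractions = unmix(cr_image)
                fine_map = allocate(fractions, previous, fr_map_end)
                fr_maps.append(fine_map)
                previous = fine_map
            return fr_maps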

    Mapping annual forest cover by fusing PALSAR/PALSAR-2 and MODIS NDVI during 2007–2016

    Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) HH and HV polarization data were used previously to produce annual, global 25 m forest maps between 2007 and 2010, and the latest global forest maps, for 2015 and 2016, were produced using ALOS-2 PALSAR-2 data. However, annual 25 m spatial resolution forest maps for 2011–2014 are missing because of the gap in operation between ALOS and ALOS-2, preventing the construction of a continuous, fine resolution time-series dataset on the world's forests. In contrast, MODerate Resolution Imaging Spectroradiometer (MODIS) NDVI images have been available globally since 2000. This research developed a novel method to produce annual 25 m forest maps for 2007–2016 by fusing the fine spatial resolution, but asynchronous, PALSAR/PALSAR-2 data with the coarse spatial resolution, but synchronous, MODIS NDVI data, thus filling the four-year gap in the ALOS and ALOS-2 time-series as well as enhancing the existing mapping activity. The method was developed with two key objectives: 1) producing more accurate 25 m forest maps by integrating PALSAR/PALSAR-2 and MODIS NDVI data during 2007–2010 and 2015–2016; 2) reconstructing annual 25 m forest maps from time-series MODIS NDVI images during 2011–2014. Specifically, a decision tree classification was developed for forest mapping based on both the PALSAR/PALSAR-2 and MODIS NDVI data, and a new spatial-temporal super-resolution mapping method was proposed to reconstruct the 25 m forest maps from the time-series MODIS NDVI images. Three study sites, in Paraguay, the USA and Russia, were chosen, as they represent the world's three main forest types: tropical forest, temperate broadleaf and mixed forest, and boreal conifer forest, respectively. Compared with traditional methods, the proposed approach produced the most accurate continuous time-series of fine spatial resolution forest maps both visually and quantitatively. For the forest maps during 2007–2010 and 2015–2016, the results had greater overall accuracy values (>98%) than those of the original JAXA forest product. For the reconstructed 25 m forest maps during 2011–2014, the increases in classification accuracy relative to three benchmark methods were statistically significant, and the overall accuracy values for the three study sites were almost universally >92%. The proposed approach, therefore, has great potential to support the production of annual 25 m forest maps by fusing PALSAR/PALSAR-2 and MODIS NDVI data during 2007–2016.
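
    The decision tree classification mentioned above can be pictured as a small set of threshold rules on the radar backscatter and NDVI features. The rule below is purely illustrative: the feature names and thresholds are placeholders, not values derived in the paper.

        def classify_forest(hh_db, hv_db, ndvi_max):
            # hh_db, hv_db: PALSAR/PALSAR-2 backscatter (dB); ndvi_max: annual
            # maximum MODIS NDVI for the pixel.  Thresholds are placeholders.
            if hv_db > -15.0 and hh_db > -12.0 and ndvi_max > 0.6:
                return "forest"
            return "non-forest"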

    SFSDAF: an enhanced FSDAF that incorporates sub-pixel class fraction change information for spatio-temporal image fusion

    Spatio-temporal image fusion methods have become a popular means to produce remotely sensed data sets that have both fine spatial and temporal resolution. Accurate prediction of reflectance change is difficult, especially when the change is caused by both phenological change and land cover class change. Although several spatio-temporal fusion methods, such as the Flexible Spatiotemporal DAta Fusion (FSDAF) method, directly derive land cover phenological change information (such as endmember change) at different dates, the direct derivation of land cover class change information is challenging. In this paper, an enhanced FSDAF that incorporates sub-pixel class fraction change information (SFSDAF) is proposed. By directly deriving the sub-pixel land cover class fraction change information, the proposed method allows accurate prediction even for heterogeneous regions that undergo a land cover class change. In particular, SFSDAF directly derives fine spatial resolution endmember change and class fraction change between the date of the observed image pair and the date of prediction, which can help identify image reflectance change resulting from different sources. SFSDAF predicts a fine resolution image at the time of acquisition of the coarse resolution image using only one prior coarse and fine resolution image pair, and accommodates variations in reflectance due to both natural fluctuations in class spectral response (e.g. due to phenology) and land cover class change. The method is illustrated using degraded and real images and compared against three established spatio-temporal fusion methods. The results show that SFSDAF produced the least blurred images and the most accurate predictions of fine resolution reflectance values, especially for regions of heterogeneous landscape and regions that undergo some land cover class change. Consequently, SFSDAF has considerable potential for monitoring Earth surface dynamics.
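
    A compact way to see the role of the class fraction change information is the linear mixture decomposition sketched below: the change added to the observed fine image at the first date is split into an endmember (phenological) change term and a sub-pixel class fraction change term. This is a simplified illustration of the idea under assumed array shapes, not the full SFSDAF algorithm.

        import numpy as np

        def predict_fine_t2(fine_t1, fractions_t1, fractions_t2,
                            endmembers_t1, endmembers_t2):
            # fine_t1: (n_pixels, n_bands) observed fine image at the first date.
            # fractions_*: (n_pixels, n_classes) sub-pixel class fractions.
            # endmembers_*: (n_classes, n_bands) class spectra at each date.
            # Endmember (phenological) change, with first-date fractions held fixed.
            phenology_change = fractions_t1 @ (endmembers_t2 - endmembers_t1)
            # Additional change caused by the sub-pixel class fraction change.
            fraction_change = (fractions_t2 - fractions_t1) @ endmembers_t2
            return fine_t1 + phenology_change + fraction_change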

    Principles and methods of scaling geospatial Earth science data

    The properties of geographical phenomena vary with changes in the scale of measurement. The information observed at one scale often cannot be used directly as information at another scale. Scaling addresses these changes in properties in relation to the scale of measurement, and plays an important role in the Earth sciences by providing information at the scale of interest, which may be required for a range of applications and may be useful for inferring geographical patterns and processes. This paper presents a review of geospatial scaling methods for Earth science data. Based on spatial properties, we propose a methodological framework for scaling that addresses upscaling, downscaling and side-scaling. This framework combines scale-independent and scale-dependent properties of geographical variables. It allows treatment of the varying spatial heterogeneity of geographical phenomena, combines spatial autocorrelation and heterogeneity, addresses scale-independent and scale-dependent factors, explores changes in information, incorporates geospatial Earth surface processes and uncertainties, and identifies the optimal scale(s) of models. This study shows that the classification of scaling methods according to various heterogeneities has great potential utility as an underpinning conceptual basis for advances in many Earth science research domains.
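
    As a concrete, minimal illustration of the upscaling/downscaling vocabulary used in this review, the two helpers below aggregate a fine grid to a coarser one by block averaging and perform the trivial inverse by block replication. Real scaling studies use far richer operators, so these are only reference points, not methods from the paper.

        import numpy as np

        def upscale_mean(fine, factor):
            # Aggregate a fine 2-D grid to a coarser grid whose cells are
            # `factor` times larger, using the block mean as the upscaling rule.
            h, w = fine.shape
            fine = fine[:h - h % factor, :w - w % factor]
            return fine.reshape(fine.shape[0] // factor, factor,
                                fine.shape[1] // factor, factor).mean(axis=(1, 3))

        def downscale_replicate(coarse, factor):
            # Trivial downscaling baseline: repeat each coarse value over a
            # factor x factor block; proper downscaling tries to recover the
            # sub-cell variation instead.
            return np.kron(coarse, np.ones((factor, factor)))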