
    Mapping Crop Cycles in China Using MODIS-EVI Time Series

    As the Earth’s population continues to grow and demand for food increases, timely information on the properties and dynamics of global agricultural systems is becoming increasingly important. Global land cover maps derived from satellite data provide indispensable information regarding the geographic distribution and areal extent of global croplands. However, land use information, such as cropping intensity (defined here as the number of cropping cycles per year), is not routinely available over large areas because mapping it from remote sensing is challenging. In this study, we present a simple but efficient algorithm for automated mapping of cropping intensity based on data from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS). The proposed algorithm first applies an adaptive Savitzky-Golay filter to smooth Enhanced Vegetation Index (EVI) time series derived from MODIS surface reflectance data. It then uses an iterative moving-window methodology to identify cropping cycles from the smoothed EVI time series. Comparison of results from our algorithm with national survey data at both the provincial and prefectural levels in China shows that the algorithm provides estimates of gross sown area that agree well with inventory data. Accuracy assessment comparing visually interpreted time series with algorithm results for a random sample of agricultural areas in China indicates an overall accuracy of 91.0% for three classes defined by the number of cycles observed in EVI time series. The algorithm therefore provides a straightforward and efficient method for mapping cropping intensity from MODIS time series data.
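The two-stage procedure described above (smooth the EVI series, then count cycles) can be sketched in Python. Note this is an illustrative stand-in: SciPy's `savgol_filter` and `find_peaks` replace the paper's adaptive filter and iterative moving-window search, and the prominence threshold is a placeholder, not a calibrated value.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def count_cropping_cycles(evi, window=7, polyorder=3, min_prominence=0.2):
    """Smooth a 1-D EVI time series and count growing cycles as
    prominent peaks. All thresholds here are illustrative."""
    smoothed = savgol_filter(evi, window_length=window, polyorder=polyorder)
    peaks, _ = find_peaks(smoothed, prominence=min_prominence)
    return len(peaks), smoothed

# Synthetic double-cropping year: two green-up/senescence cycles
t = np.linspace(0, 2 * np.pi, 46)           # ~46 8-day composites per year
evi = 0.25 + 0.25 * np.abs(np.sin(t))       # two broad seasonal peaks
evi += np.random.default_rng(0).normal(0, 0.01, t.size)  # sensor noise
n_cycles, smoothed_series = count_cropping_cycles(evi)
```

A pixel with `n_cycles` of 2 would be labeled double-cropped; the paper's three intensity classes follow directly from this count.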

    View angle effects on MODIS snow mapping in forests

    Binary snow maps and fractional snow cover data are provided routinely from MODIS (Moderate Resolution Imaging Spectroradiometer). This paper investigates how the wide observation angles of MODIS influence the current snow mapping algorithm in forested areas. Theoretical modeling results indicate that large view zenith angles (VZA) can lead to underestimation of fractional snow cover (FSC) by reducing the amount of ground surface that is viewable through forest canopies, and by increasing uncertainties during the gridding of MODIS data. At the end of the MODIS scan line, the total modeled error can be as much as 50% for FSC. Empirical analysis of MODIS/Terra snow products at four forest sites shows high fluctuation in FSC estimates on consecutive days. In addition, the normalized difference snow index (NDSI) values, which are the primary input to the MODIS snow mapping algorithms, decrease as VZA increases at the site level. At the pixel level, NDSI values have higher variances and are correlated with the normalized difference vegetation index (NDVI) in snow-covered forests. These findings are consistent with our modeled results, and imply that consideration of view angle effects could improve MODIS snow monitoring in forested areas.
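The two indices the abstract leans on, NDSI as the primary input and NDVI as a covariate in forests, are both simple normalized band ratios. The sketch below uses generic green/SWIR and NIR/red reflectances for illustration; the reflectance values in the example are made up, not MODIS measurements.

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index. Snow is bright in the green
    band and dark in the shortwave infrared, so NDSI is high over snow."""
    green, swir = np.asarray(green, float), np.asarray(swir, float)
    return (green - swir) / (green + swir)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Hypothetical snow-covered pixel: high green, low SWIR reflectance
snow_index = ndsi(0.8, 0.1)
```

A canopy that blocks more of the snow-covered ground at large VZA pushes the pixel's green reflectance down and its SWIR up, which is why NDSI drops with view angle in the paper's site-level analysis.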

    Fusion of MODIS and Landsat data to allow near real-time monitoring of land surface change

    Thesis (Ph.D.)--Boston University. A new methodology for fusion of MODIS and Landsat data improves monitoring of land surface change and snow mapping. This fusion method is based on prediction of MODIS data using a time series of Landsat data. An underlying hypothesis is that the predicted MODIS images form a more stable basis for comparison with new MODIS images than previous MODIS images do. Correlations between predicted and observed MODIS images are higher than for successive days of MODIS data, confirming this hypothesis. Differences in the spectral signatures between predicted and real MODIS images become the "signal" used to detect land surface change. Tests of the fusion method to detect forest clearing show producer's and user's accuracies of 86% and 85%, respectively. Cleared patches of forest as small as 5-6 ha can be detected, a considerable improvement over current published results. Additionally, the fusion method can be used to map snow cover on a daily basis and is more accurate than current operational MODIS snow products. These encouraging results indicate that the fusion method holds promise for improving monitoring of land surface change in near real-time.
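The change-detection step, comparing a predicted MODIS image against the newly observed one, reduces to per-pixel differencing once the prediction exists. The sketch below is a minimal illustration with a hypothetical scalar threshold; the thesis derives its own decision rule from the spectral-difference signal.

```python
import numpy as np

def flag_change(predicted, observed, threshold=0.1):
    """Flag pixels where a newly observed image departs from the image
    predicted from the Landsat time series. The threshold value is a
    placeholder chosen for illustration only."""
    diff = np.abs(np.asarray(observed, float) - np.asarray(predicted, float))
    return diff > threshold

predicted = np.array([0.30, 0.32, 0.31])   # reflectance expected from Landsat history
observed = np.array([0.31, 0.50, 0.30])    # middle pixel departs from prediction
changed = flag_change(predicted, observed)
```

The thesis's key insight is that `predicted` is more stable than yesterday's MODIS image, so the residual `diff` isolates genuine surface change rather than day-to-day observation noise.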

    Characterizing Spring Phenological Changes of the Land Surface across the Conterminous United States from 2001 to 2021

    Monitoring land surface phenology plays a fundamental role in quantifying the impact of climate change on terrestrial ecosystems. Shifts in land surface spring phenology have become a hot spot in the field of global climate change research. While numerous studies have used satellite data to capture the interannual variation of the start of the growing season (SOS), the understanding of the spatiotemporal performance of SOS needs to be enhanced. In this study, we retrieved the annual SOS from the Moderate Resolution Imaging Spectroradiometer (MODIS) two-band enhanced vegetation index (EVI2) time series in the conterminous United States from 2001 to 2021, and explored the spatial and temporal patterns of SOS and its trend characteristics in different land cover types. The performance of the satellite-derived SOS was evaluated using the USA National Phenology Network (USA-NPN) and Harvard Forest data. The results revealed that SOS exhibited a significantly delayed trend of 1.537 days/degree (p < 0.01) with increasing latitude. The timing of the satellite-derived SOS was significantly and positively correlated with the in-situ data. Although the overall trends were not significant from 2001 to 2021, the SOS and its interannual variability exhibited a wide range of variation across land cover types. The earliest SOS occurred in urban and built-up land areas, while the latest occurred in cropland areas. In addition, mixed trends in SOS were observed in sporadic areas of different land cover types. Our results indicate that (1) a warming hiatus slowed the advance of land surface spring phenology across the conterminous United States under climate change, and (2) large-scale extraction of land surface spring phenology trends should consider the potential effects of different land cover types. To improve our understanding of climate change, we need to continuously monitor and analyze the dynamics of land surface spring phenology.
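A common way to retrieve SOS from an EVI2 curve, shown here purely for illustration since the paper does not spell out its exact retrieval in this abstract, is the half-amplitude convention: SOS is the first date the smoothed series rises past half of its seasonal amplitude.

```python
import numpy as np

def start_of_season(evi2, doy):
    """Estimate SOS as the first day of year at which a (smoothed) EVI2
    series rises past half of its seasonal amplitude. This is a common
    land-surface-phenology convention, used here as an illustration."""
    evi2 = np.asarray(evi2, float)
    half_amplitude = evi2.min() + 0.5 * (evi2.max() - evi2.min())
    first_cross = int(np.argmax(evi2 >= half_amplitude))  # first True index
    return int(doy[first_cross])

# Synthetic green-up: a logistic rise centered on day 120
doy = np.arange(1, 366, 8)                      # 8-day composite dates
evi2 = 0.2 + 0.4 / (1.0 + np.exp(-0.1 * (doy - 120)))
sos = start_of_season(evi2, doy)
```

Applying such a retrieval per pixel and per year, then regressing SOS on latitude or on year, yields the delay gradient (days/degree) and the interannual trends the study reports.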

    Multi-scale evaluation of light use efficiency in MODIS gross primary productivity for croplands in the Midwestern United States

    Satellite remote sensing provides continuous observations of land surfaces, thereby offering opportunities for large-scale monitoring of terrestrial productivity. Production Efficiency Models (PEMs) have been widely used in satellite-based studies to simulate carbon exchanges between the atmosphere and ecosystems. However, model parameterization of the maximum light use efficiency (ε*GPP) varies considerably for croplands in agricultural studies at different scales. In this study, we evaluate cropland ε*GPP in the MODIS Gross Primary Productivity (GPP) model (MOD17) using in situ measurements and inventory datasets across the Midwestern US. The site-scale calibration using 28 site-years of tower measurements derives ε*GPP values of 2.78 ± 0.48 gC MJ−1 (± standard deviation) for corn and 1.64 ± 0.23 gC MJ−1 for soybean. The calibrated models account for approximately 60–80% of the variance in tower-based GPP. The regional-scale study using 4-year agricultural inventory data suggests comparable ε*GPP values of 2.48 ± 0.65 gC MJ−1 for corn and 1.18 ± 0.29 gC MJ−1 for soybean. Annual GPP estimates derived from inventory data (1848.4 ± 298.1 gC m−2 y−1 for corn and 908.9 ± 166.3 gC m−2 y−1 for soybean) are consistent with modeled GPP (1887.8 ± 229.8 gC m−2 y−1 for corn and 849.1 ± 122.2 gC m−2 y−1 for soybean). Our results are in line with recent studies and imply that cropland GPP is largely underestimated in the MODIS GPP products for the Midwestern US. Our findings indicate that model parameters (primarily ε*GPP) should be carefully recalibrated for regional studies, and that field-derived ε*GPP can be consistently applied to large-scale modeling as we did here for the Midwestern US.
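The MOD17 logic being recalibrated can be sketched as GPP = ε* × f(Tmin) × f(VPD) × FPAR × PAR, where the two f(·) scalars are linear ramps that down-regulate efficiency under cold or dry conditions. The ramp endpoints below are illustrative placeholders, not MOD17's biome parameter table; only the corn ε* of 2.78 gC MJ−1 comes from the study.

```python
def ramp(x, zero_at, one_at):
    """Linear ramp scalar clamped to [0, 1]; works in either direction."""
    frac = (x - zero_at) / (one_at - zero_at)
    return min(max(frac, 0.0), 1.0)

def gpp_lue(par, fpar, eps_max, tmin, vpd):
    """MOD17-style light-use-efficiency GPP (gC m-2 d-1):
    GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR.
    Ramp endpoints are illustrative, not the operational parameters."""
    f_tmin = ramp(tmin, zero_at=-8.0, one_at=12.0)    # cold-stress scalar (deg C)
    f_vpd = ramp(vpd, zero_at=3100.0, one_at=650.0)   # dryness scalar (Pa), high VPD -> 0
    return eps_max * f_tmin * f_vpd * fpar * par

# Unstressed corn day using the study's site-calibrated eps_max
gpp = gpp_lue(par=10.0, fpar=0.8, eps_max=2.78, tmin=15.0, vpd=600.0)
```

Because GPP scales linearly with ε*, an operational ε* that is too low translates directly into the proportional underestimation of cropland GPP the study reports.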

    Satellite-Based Models Need Improvements to Simulating Annual Gross Primary Productivity: A Comparison of Six Models for Regional Modeling of Deciduous Broadleaf Forests

    Modeling vegetation gross primary productivity (GPP) is crucial to understanding land–atmosphere interactions and, hence, the global carbon cycle. While studies have demonstrated that satellite-based models can simulate the intra-annual variation of vegetation GPP well, there is a need to understand our ability to capture interannual GPP variability. This study compares the spatiotemporal performance of six satellite-based models in regional modeling of annual GPP for deciduous broadleaf forests across the eastern United States. The 2001–2012 average annual gross primary productivities (AAGPPs) derived from different models have mismatched spatial patterns with divergent changing trends along both latitude and longitude. Evaluation using flux tower data indicates that some models can have considerable biases on a yearly basis. All tested models, despite performing well on the 8-day basis because of the underlying strong seasonality in vegetation productivity, fail to capture the interannual variation of GPP across sites and years. Our study identifies considerable modeling uncertainties on a yearly basis, even for an extensively studied biome such as deciduous broadleaf forest, at both site and large scales. Improvements to the current satellite-based models have to be made to capture interannual GPP variation in addition to intra-annual variation.

    Identifying Leaf Phenology of Deciduous Broadleaf Forests from PhenoCam Images Using a Convolutional Neural Network Regression Method

    Vegetation phenology plays a key role in influencing ecosystem processes and biosphere-atmosphere feedbacks. Digital cameras such as PhenoCam that monitor vegetation canopies in near real-time provide continuous images that record phenological and environmental changes. There is a need to develop methods for automated and effective detection of vegetation dynamics from PhenoCam images. Here we developed a method to predict leaf phenology of deciduous broadleaf forests from individual PhenoCam images using deep learning approaches. We tested four convolutional neural network regression (CNNR) networks on their ability to predict vegetation growing dates from PhenoCam images at 56 sites in North America. In the one-site experiment, predictions for dates after the leaf-out events agree well with the observed data, with a coefficient of determination (R2) of nearly 0.999, a root mean square error (RMSE) of up to 3.7 days, and a mean absolute error (MAE) of up to 2.1 days. The method achieved lower accuracies in the all-site experiment than in the one-site experiment: R2 was 0.843, RMSE was 25.2 days, and MAE was 9.3 days. Model accuracy increased when the deep networks used region-of-interest images rather than entire images as inputs. Compared to existing methods that rely on time series of PhenoCam images for studying leaf phenology, we found that the deep learning method is a feasible solution for identifying leaf phenology of deciduous broadleaf forests from individual PhenoCam images.

    Extracting Building Boundaries from High Resolution Optical Images and LiDAR Data by Integrating the Convolutional Neural Network and the Active Contour Model

    Identifying and extracting building boundaries from remote sensing data has been one of the hot topics in photogrammetry for decades. The active contour model (ACM) is a robust segmentation method that has been widely used in building boundary extraction, but it often results in biased building boundary extraction due to tree and background mixtures. Although classification methods can mitigate this efficiently by separating buildings from other objects, they often suffer from unavoidable salt-and-pepper artifacts. In this paper, we combine robust classification convolutional neural networks (CNNs) with the ACM to overcome the current limitations of building boundary extraction algorithms. We conduct two types of experiments: the first integrates the ACM into the CNN training process, whereas the second starts building footprint detection with a CNN and then uses the ACM for post-processing. Assessments conducted at three levels demonstrate that the proposed methods can efficiently extract building boundaries in five test scenes from two datasets. The mean accuracies in terms of the F1 score for the first type (and the second type) of experiment are 96.43 ± 3.34% (95.68 ± 3.22%), 88.60 ± 3.99% (89.06 ± 3.96%), and 91.62 ± 1.61% (91.47 ± 2.58%) at the scene, object, and pixel levels, respectively. The combined CNN and ACM solutions were shown to be effective at extracting building boundaries from high-resolution optical images and LiDAR data.

    Attention-Guided Label Refinement Network for Semantic Segmentation of Very High Resolution Aerial Orthoimages

    Recent applications of fully convolutional networks (FCNs) have been shown to improve the semantic segmentation of very high resolution (VHR) remote-sensing images because of their excellent feature representation and end-to-end pixel labeling capabilities. While many FCN-based methods concatenate features from multilevel encoding stages to refine the coarse labeling results, the semantic gap between features of different levels and the selection of representative features are often overlooked, leading to the generation of redundant information and unexpected classification results. In this article, we propose an attention-guided label refinement network (ALRNet) for improved semantic labeling of VHR images. ALRNet follows the paradigm of the encoder-decoder architecture, which progressively refines the coarse labeling maps of different scales by using the channelwise attention mechanism. A novel attention-guided feature fusion module based on the squeeze-and-excitation module is designed to fuse higher-level and lower-level features. In this way, the semantic gaps among features of different levels are reduced, and the category discrimination of each pixel in the lower-level features is strengthened, which is helpful for subsequent label refinement. ALRNet is tested on three public datasets, including two ISPRS 2-D labeling datasets and the Wuhan University aerial building dataset. Results demonstrate that ALRNet achieves promising segmentation performance in comparison with state-of-the-art deep learning networks. The source code of ALRNet is made publicly available for further studies.
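The channelwise attention at the heart of the fusion module can be illustrated with a plain NumPy sketch of a squeeze-and-excitation block: global-average-pool each channel ("squeeze"), pass the pooled vector through a two-layer bottleneck ("excitation"), and rescale the channels by sigmoid gates. In ALRNet the analogous weights are learned end to end; here `w1` and `w2` are plain arrays supplied by the caller.

```python
import numpy as np

def squeeze_excite(features, w1, w2):
    """Channel attention in the spirit of a squeeze-and-excitation block.
    features: (C, H, W) feature map; w1: (C, C/r); w2: (C/r, C)."""
    pooled = features.mean(axis=(1, 2))             # squeeze: one scalar per channel
    hidden = np.maximum(pooled @ w1, 0.0)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid gate per channel
    return features * gates[:, None, None]          # rescale each channel

# With zero weights every gate is sigmoid(0) = 0.5, halving each channel
feats = np.ones((4, 3, 3))
out = squeeze_excite(feats, np.zeros((4, 2)), np.zeros((2, 4)))
```

Gating lower-level features this way lets the decoder emphasize the channels that agree with the higher-level semantics before fusion, which is the mechanism ALRNet uses to narrow the semantic gap between encoder stages.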