15 research outputs found

    Impacts of Rapid Socioeconomic Development on Cropping Intensity Dynamics in China during 2001–2016

    Changes in cropping intensity reflect not only changes in land use but also the transformation of land functions. Although both natural conditions and socioeconomic factors can influence the spatial distribution of cropping intensity and its changes, socioeconomic developments related to human activities exert great impacts on short-term cropping intensity changes. The driving forces of these changes carry a high level of uncertainty, and few researchers have conducted comprehensive studies of the underlying driving forces and mechanisms. This study produced cropping intensity maps of China from 2001 to 2016 using remote sensing data and analyzed the impacts of socioeconomic drivers on cropping intensity and its changes in nine major agricultural zones. We found that the average annual cropping intensity in all nine agricultural zones increased from 2001 to 2016 under rapid socioeconomic development, and that the increases were significant (p < 0.05, Mann–Kendall test) in seven of the zones, the exceptions being the Northeast China Plain (NE Plain) and the Qinghai–Tibet Plateau (QT Plateau). Based on results from the Geo-Detector, a widely used geospatial analysis tool, the dominant factors affecting the distribution of cropping intensity were related to arable land output in the plain regions and to topography in the mountainous regions. The factors affecting cropping intensity changes were mainly related to arable land area and crop yields in northern China, and to regional economic developments, such as machinery power input and farmers' income, in southern China. These findings provide useful cropping intensity data and insights for policymaking on how to use cultivated land resources efficiently and sustainably.
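
    As a concrete illustration of the trend analysis mentioned above, the following is a minimal Python sketch of the Mann–Kendall test (simple form, without a tie correction) applied to an annual cropping-intensity series. The 2001–2016 values below are invented stand-ins, not results from the study.

```python
# Minimal Mann-Kendall trend test sketch; the 16-value series is
# illustrative dummy data, not figures from the paper.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the MK statistic S, the standardized Z, and the two-sided p-value.

    Simple version that assumes few or no tied values (no tie correction).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Hypothetical mean annual cropping intensity for one zone, 2001-2016.
intensity = [1.21, 1.22, 1.24, 1.23, 1.26, 1.27, 1.29, 1.28,
             1.31, 1.32, 1.34, 1.33, 1.36, 1.38, 1.37, 1.40]
s, z, p = mann_kendall(intensity)
print(f"S={s}, Z={z:.2f}, p={p:.4f}")  # p < 0.05 indicates a significant trend
```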

    One-Class Classification of Airborne LiDAR Data in Urban Areas Using a Presence and Background Learning Algorithm

    Automatic classification of light detection and ranging (LiDAR) data in urban areas is of great importance for many applications, such as generating three-dimensional (3D) building models and monitoring power lines. Traditional supervised classification methods require training samples of all classes to construct a reliable classifier. However, complete training samples are normally hard and costly to collect, and a common circumstance is that only training samples for a class of interest are available, in which case traditional supervised classification methods may be inappropriate. In this study, we investigated the possibility of using a novel one-class classification algorithm, the presence and background learning (PBL) algorithm, to classify LiDAR data in an urban scenario. The results demonstrated that the PBL algorithm implemented with a back-propagation (BP) neural network (PBL-BP) could effectively classify a single class (e.g., building, tree, terrain, power line, or others) from an airborne LiDAR point cloud with very high accuracy. The mean F-score across all classes for the PBL-BP classification results was 0.94, higher than those of the one-class support vector machine (SVM), biased SVM, and maximum entropy methods (0.68, 0.82, and 0.93, respectively). Moreover, the PBL-BP algorithm yielded an overall accuracy comparable to that of the multi-class SVM method. Therefore, this method is very promising for the classification of LiDAR point clouds.
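
    The abstract does not detail the PBL training procedure; a common reading of presence-background learning is to train a binary network on labeled presence samples against randomly drawn unlabeled background samples. The sketch below follows that reading, with invented per-point features and a small scikit-learn network standing in for the BP model; the features, sample sizes, and 0.5 cutoff are illustrative assumptions.

```python
# Hedged sketch of presence-background learning for one-class labeling of
# LiDAR points: presence samples of the target class are labeled 1, and
# randomly drawn unlabeled points serve as "background" labeled 0.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical per-point features (e.g., height, intensity, planarity).
presence = rng.normal(loc=[8.0, 0.6, 0.9], scale=0.5, size=(200, 3))    # e.g., buildings
background = rng.normal(loc=[2.0, 0.4, 0.3], scale=1.5, size=(800, 3))  # unlabeled points

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

# A small back-propagation network stands in for the PBL-BP model.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X, y)

# Points scoring above a threshold are assigned to the class of interest;
# 0.5 is an arbitrary illustrative cutoff.
scores = clf.predict_proba(X)[:, 1]
print("predicted presence fraction:", float((scores > 0.5).mean()))
```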

    Deep Learning Approaches for the Mapping of Tree Species Diversity in a Tropical Wetland Using Airborne LiDAR and High-Spatial-Resolution Remote Sensing Images

    Monitoring tree species diversity is important for maintaining forest and wetland ecosystem services and for resource management. Remote sensing is an efficient alternative to traditional field work for mapping tree species diversity over large areas. Previous studies have used light detection and ranging (LiDAR) and imaging spectroscopy (hyperspectral or multispectral remote sensing) for species richness prediction. The recent development of very high spatial resolution (VHR) RGB images has enabled detailed characterization of canopies and forest structures. In this study, we developed a three-step workflow for mapping tree species diversity, with the aim of advancing deep-learning-based diversity assessment in a tropical wetland (Haizhu Wetland) in South China using VHR-RGB images and LiDAR points. First, individual trees were detected from a canopy height model (CHM, derived from the LiDAR points) using the local-maxima-based method in the FUSION software (version 3.70, Seattle, WA, USA). Then, tree species were identified at the individual-tree level via a patch-based image input method, which cropped the RGB images into small patches (the individually detected trees) centered on the detected tree apexes. Three deep learning architectures (AlexNet, VGG16, and ResNet50) were modified to classify the tree species, as they can make good use of spatial context information. Finally, four diversity indices, namely the Margalef richness index, the Shannon–Wiener diversity index, the Simpson diversity index, and the Pielou evenness index, were calculated within fixed 30 × 30 m subplots for assessment. In the classification phase, VGG16 performed best, with an overall accuracy of 73.25% for 18 tree species. Based on the classification results, the mapped tree species diversity showed reasonable agreement with field survey data (Margalef: R² = 0.4562, root-mean-square error (RMSE) = 0.5629; Shannon–Wiener: R² = 0.7948, RMSE = 0.7202; Simpson: R² = 0.7907, RMSE = 0.1038; Pielou: R² = 0.5875, RMSE = 0.3053). While challenges remain for individual tree detection and species classification, the deep-learning-based solution shows potential for mapping tree species diversity.
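
    The four diversity indices named above have standard closed forms (Margalef: (S - 1)/ln N; Shannon–Wiener: H' = -Σ p_i ln p_i; Simpson: 1 - Σ p_i^2; Pielou: J' = H'/ln S). A minimal sketch computing them from per-species tree counts in one hypothetical 30 × 30 m subplot follows; the species and counts are invented.

```python
# Minimal sketch of the four diversity indices, computed from per-species
# tree counts in one hypothetical subplot. Counts are illustrative only.
import math
from collections import Counter

def diversity_indices(species_counts):
    """Margalef richness, Shannon-Wiener H', Simpson (1 - D), and Pielou J'."""
    counts = [c for c in species_counts.values() if c > 0]
    n = sum(counts)   # total number of individuals
    s = len(counts)   # number of species observed
    margalef = (s - 1) / math.log(n)
    shannon = -sum((c / n) * math.log(c / n) for c in counts)
    simpson = 1.0 - sum((c / n) ** 2 for c in counts)
    pielou = shannon / math.log(s) if s > 1 else 0.0
    return margalef, shannon, simpson, pielou

# Hypothetical classified trees in one 30 x 30 m subplot.
plot = Counter({"Ficus microcarpa": 12, "Bauhinia": 5, "Eucalyptus": 3, "Other": 2})
for name, value in zip(("Margalef", "Shannon-Wiener", "Simpson", "Pielou"),
                       diversity_indices(plot)):
    print(f"{name}: {value:.3f}")
```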

    Extraction of Pig Farms From GaoFen Satellite Images Based on Deep Learning

    Accurate information on the spatial distribution and area of pig farms is essential for monitoring pig breeding, estimating pork production, and managing the environmental impacts of pig breeding. Governmental regulatory departments mostly rely on field surveys to obtain pig farm information, and few studies have focused on extracting pig farm information from remote sensing data. As the buildings on pig farms are small in scale and scattered in distribution, pig farm identification using high-resolution data and deep learning algorithms is worth exploring. In this article, we propose a method for identifying pig farms with a deep learning algorithm and multiple sources of GaoFen (GF) image data. Experiments were conducted with different combinations of GaoFen satellite images (GF-2, GF-5, and GF-7) to determine their suitability for pig farm extraction. The results show that the average overall accuracy of pig farm identification was above 80% for all combinations of GaoFen imagery. The spatial detail provided by the GF-2 satellite improved identification accuracy more than the spectral detail provided by the GF-5 hyperspectral data or the digital surface model from the GF-7 satellite. The deep learning algorithm performed well in identifying pig farms with a greater number of patches and a higher aggregation index, but had lower accuracy in extracting pig farm distributions with high edge density and patch density.
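
    As a schematic of how multi-source GaoFen inputs might be combined for such a model, the sketch below stacks hypothetical co-registered GF-2 RGB patches and a GF-7 DSM band channel-wise and scores the patches with a tiny CNN. The band choice, patch size, and network are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch: channel-wise stacking of co-registered multi-source bands
# (stand-ins for GF-2 RGB + a GF-7 DSM layer) scored by a tiny CNN.
import torch
import torch.nn as nn

class TinyPatchNet(nn.Module):
    """Minimal patch classifier: pig farm vs. background (2 classes)."""
    def __init__(self, in_channels=4, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Hypothetical 64x64 patches: 3 GF-2 RGB bands plus 1 GF-7 DSM band.
rgb = torch.rand(8, 3, 64, 64)    # stand-in for GF-2 reflectance patches
dsm = torch.rand(8, 1, 64, 64)    # stand-in for GF-7 surface heights
x = torch.cat([rgb, dsm], dim=1)  # channel-wise stack -> (8, 4, 64, 64)

model = TinyPatchNet(in_channels=x.shape[1])
print(model(x).shape)  # (8, 2): per-patch class scores
```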

    Gap-Filling and Missing Information Recovery for Time Series of MODIS Data Using Deep Learning-Based Methods

    Sensors onboard satellite platforms with short revisit periods acquire frequent Earth observation data. One limitation to the utility of satellite-based data is missing information in time series of images due to cloud contamination and sensor malfunction. Most studies on gap-filling and cloud removal process individual images, and existing multi-temporal image restoration methods still have trouble with images in which large areas are frequently contaminated by cloud. Considering these issues, we proposed a deep learning-based method, the content-sequence-texture generation (CSTG) network, to generate gap-filled time series of images. The method uses deep neural networks to restore remote sensing images with missing information by accounting for image content, texture, and temporal sequence. We designed a content generation network to preliminarily fill in the missing parts and a sequence-texture generation network to refine the gap-filling outputs. We used time series of Moderate Resolution Imaging Spectroradiometer (MODIS) data from regions with diverse surface characteristics in North America, Europe, and Asia to train and test the proposed model. Compared to the reference images, the CSTG achieved a structural similarity (SSIM) of 0.953 and a mean absolute error (MAE) of 0.016 on average for the restored time series in experiments with artificially introduced gaps. The developed method restores time series of images with detailed texture and generally performed better than the comparative methods, especially for large or temporally overlapping missing areas. Our study provides a practical method to gap-fill time series of remote sensing images and highlights the power of deep learning methods in reconstructing remote sensing images.
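
    The reported scores suggest an evaluation of restored images against cloud-free references; a minimal sketch of computing SSIM and MAE for one such pair follows, using random arrays as stand-ins for MODIS bands.

```python
# Minimal sketch of the evaluation implied by the abstract: comparing a
# restored image against its cloud-free reference with SSIM and MAE.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((240, 240)).astype(np.float32)  # cloud-free truth
restored = np.clip(reference + rng.normal(0, 0.05, reference.shape),
                   0, 1).astype(np.float32)            # simulated restoration

ssim = structural_similarity(reference, restored, data_range=1.0)
mae = float(np.mean(np.abs(reference - restored)))
print(f"SSIM={ssim:.3f}, MAE={mae:.3f}")  # the paper reports ~0.953 / 0.016 on average
```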

    Convolution-Transformer Adaptive Fusion Network for Hyperspectral Image Classification

    Hyperspectral image (HSI) classification is an important but challenging topic in remote sensing and Earth observation. By coupling the advantages of the convolutional neural network (CNN) and the Transformer, CNN–Transformer hybrid models can extract local and global features simultaneously and have achieved outstanding performance in HSI classification. However, most existing CNN–Transformer hybrids use manually specified hybrid strategies, which generalize poorly and struggle to recognize fine-grained objects in HSIs of complex scenes. To overcome this problem, we propose a convolution–Transformer adaptive fusion network (CTAFNet) for pixel-wise HSI classification. A local–global feature extraction unit, the convolution–Transformer adaptive fusion kernel, was designed and integrated into the CTAFNet. The kernel captures local high-frequency features with a convolution module and extracts global, sequential low-frequency information with a Transformer module. We developed an adaptive feature fusion strategy that fuses the local high-frequency and global low-frequency features into a robust and discriminative representation of the HSI data. An encoder–decoder structure was adopted to improve the flow of fused local–global information between stages, ensuring the generalization ability of the model. Experiments on three large-scale and challenging HSI datasets demonstrate that the proposed network is superior to nine state-of-the-art approaches, highlighting the effectiveness of the adaptive CNN–Transformer hybrid strategy for HSI classification.
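
    The abstract does not spell out the fusion mechanism. One common way to realize adaptive fusion of a convolution branch and a Transformer branch is a learned gate that weights the two feature streams per position; the sketch below is a hedged guess at such a gate, not the CTAFNet implementation.

```python
# Hedged sketch of adaptive fusion between a convolution branch (local,
# high-frequency) and a Transformer branch (global, low-frequency): a
# learned sigmoid gate weights the two streams per pixel.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Gate predicts a per-pixel, per-channel weight in [0, 1].
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_conv, f_trans):
        alpha = self.gate(torch.cat([f_conv, f_trans], dim=1))
        return alpha * f_conv + (1 - alpha) * f_trans  # convex combination

# Stand-ins for branch outputs on a 32x32 feature map with 64 channels.
f_conv = torch.rand(2, 64, 32, 32)   # convolution branch features
f_trans = torch.rand(2, 64, 32, 32)  # Transformer branch features
fused = AdaptiveFusion(64)(f_conv, f_trans)
print(fused.shape)  # (2, 64, 32, 32)
```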

    Super-Resolution Image Reconstruction Method between Sentinel-2 and Gaofen-2 Based on Cascaded Generative Adversarial Networks

    Compared to natural images, remote sensing images have multi-scale and spectral characteristics that pose significant challenges for super-resolution reconstruction (SR). Networks trained on simulated data often reconstruct real low-resolution (LR) images poorly. In addition, remote sensing imagery contains fewer high-frequency components for a network to learn from than natural images do. To address these issues, we introduce GF_Sen, a new high-/low-resolution dataset built from GaoFen-2 and Sentinel-2 images, and propose CSWGAN, a cascaded network that combines spatial- and frequency-domain features. Building on the self-attention GAN (SGAN) and wavelet-based GAN (WGAN) proposed in this study, the CSWGAN combines the strengths of both networks: it models long-range dependencies to better exploit global feature information, and it extracts frequency-content differences between images to enhance the learning of high-frequency information. Experiments show that networks trained on GF_Sen achieve better performance than those trained on simulated data. The images reconstructed by the CSWGAN improve the PSNR and SSIM by 4.375 and 4.877, respectively, over the best-performing ESRGAN baseline. The CSWGAN captures the reconstruction advantages of high-frequency scenes and provides a working foundation for fine-scale remote sensing applications.
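
    To make the frequency-content idea concrete: a 2-D discrete wavelet transform separates an image into low- and high-frequency subbands, and a loss on the detail subbands is one plausible way for a wavelet-based GAN to emphasize high-frequency information. The sketch below illustrates that separation with PyWavelets; it is not the paper's exact WGAN formulation.

```python
# Hedged sketch of wavelet-based frequency separation: a single-level 2-D
# DWT splits an image into a low-frequency approximation and three
# high-frequency detail subbands, which a loss term could weight.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((128, 128)).astype(np.float32)  # stand-in for one band

# Haar DWT: cA is low-frequency; cH/cV/cD are horizontal, vertical,
# and diagonal high-frequency detail.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# One possible high-frequency emphasis: L1 distance on detail subbands
# between a super-resolved output and its high-resolution reference.
def highfreq_l1(sr, hr, wavelet="haar"):
    _, sr_details = pywt.dwt2(sr, wavelet)
    _, hr_details = pywt.dwt2(hr, wavelet)
    return sum(float(np.mean(np.abs(a - b))) for a, b in zip(sr_details, hr_details))

print(cA.shape, highfreq_l1(image, image))  # (64, 64) and 0.0
```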
