    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse Earth observation data. As these data show different features regarding resolution, accuracy, coverage, and spectral imaging ability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery, such as that provided by TerraSAR-X, can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas. Comment: This is the pre-acceptance version; to read the final version, please go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.
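
    The paper's SAR-optical matching is a custom multi-sensor adaptation, but the core dense-matching step, semi-global matching on an epipolar-rectified pair, can be sketched with OpenCV's StereoSGBM implementation. The snippet below illustrates only that step under the assumption of ordinary rectified grayscale input; the file names and matcher parameters are placeholders, not values from the study.

```python
# Minimal sketch: semi-global matching on an epipolar-rectified image pair
# using OpenCV's StereoSGBM. This is NOT the multi-sensor SAR-optical pipeline
# from the paper; it only illustrates the dense-matching step. File names and
# parameters are placeholders.
import cv2
import numpy as np

left = cv2.imread("left_rectified.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.tif", cv2.IMREAD_GRAYSCALE)

# Disparity search range must be a multiple of 16 for StereoSGBM.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,
    blockSize=5,
    P1=8 * 5 * 5,     # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,    # smoothness penalty for large disparity changes
    uniquenessRatio=10,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Reject invalid matches (negative disparities) before any height conversion.
disparity[disparity < 0] = np.nan
```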

    The 2010 Mw 6.8 Yushu (Qinghai, China) earthquake: constraints provided by InSAR and body wave seismology

    By combining observations from satellite radar, body wave seismology and optical imagery, we have determined the fault segmentation and sequence of ruptures for the 2010 Mw 6.8 Yushu (China) earthquake. We have mapped the fault trace using displacements from SAR image matching, interferometric phase and coherence, and 2.5 m SPOT-5 satellite images. Modeling the event as an elastic dislocation with three segments fitted to the fault trace suggests that the southeast and northwest segments are near vertical, with the central segment dipping 70° to the southwest; slip occurs mainly in the upper 10 km, with a maximum slip of 1.5 m at a depth of 4 km on the southeastern segment. The maximum slip in the top 1 km (i.e., near surface) is up to 1.2 m, and the inferred locations of significant surface rupture are consistent with displacements from SAR image matching and field observations. The radar interferograms show rupture over a distance of almost 80 km, much larger than the initial seismological and field estimates of the length of the fault. Part of this difference can be attributed to slip on the northwestern segment of the fault being due to an Mw 6.1 aftershock two hours after the main event. The remaining difference can be explained by a non-uniform slip distribution, with much of the moment release occurring at depths of less than 10 km. The rupture on the central and southeastern segments of the fault in the main shock propagated at a speed of 2.5 km/s southeastward toward the town of Yushu, located at the end of this segment, accounting for the considerable building damage. Strain accumulation since the last earthquake on the fault segment beyond Yushu is equivalent to an Mw 6.5 earthquake.
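
    The magnitude quoted above can be related to the rupture geometry with the standard seismic-moment formulas M0 = mu * A * s and Mw = (2/3)(log10 M0 - 9.1). The sketch below plugs in the rupture length and depth extent described in the abstract together with an assumed rigidity and mean slip (not values reported by the study) to show that the geometry is consistent with a magnitude of roughly Mw 6.8.

```python
# Back-of-the-envelope check of the moment magnitude implied by the rupture
# geometry described above. Rigidity and mean slip are illustrative
# assumptions, not values taken from the study.
import math

mu = 3.0e10          # crustal rigidity (Pa), a typical assumed value
length = 80e3        # rupture length (m), ~80 km from the interferograms
width = 10e3         # down-dip width (m), slip mainly in the upper 10 km
mean_slip = 0.7      # assumed average slip (m); the reported peak slip is 1.5 m

m0 = mu * length * width * mean_slip          # seismic moment, N*m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)     # standard Mw definition (M0 in N*m)

print(f"M0 = {m0:.2e} N*m, Mw = {mw:.1f}")    # ~Mw 6.8, consistent with the event
```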

    Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery

    The wide field of view (WFV) imaging system onboard the Chinese GaoFen-1 (GF-1) optical satellite has a 16-m resolution and a four-day revisit cycle for large-scale Earth observation. The advantages of high temporal-spatial resolution and a wide field of view make GF-1 WFV imagery very popular. However, cloud cover is an inevitable problem in GF-1 WFV imagery, which influences its precise application. Accurate cloud and cloud shadow detection in GF-1 WFV imagery is quite difficult due to the fact that there are only three visible bands and one near-infrared band. In this paper, an automatic multi-feature combined (MFC) method is proposed for cloud and cloud shadow detection in GF-1 WFV imagery. The MFC algorithm first implements threshold segmentation based on the spectral features and mask refinement based on guided filtering to generate a preliminary cloud mask. The geometric features are then used in combination with the texture features to improve the cloud detection results and produce the final cloud mask. Finally, the cloud shadow mask is acquired by means of cloud and shadow matching and a follow-up correction process. The method was validated using 108 globally distributed scenes. The results indicate that MFC performs well under most conditions, and the average overall accuracy of MFC cloud detection is as high as 96.8%. In a comparative analysis against the officially provided cloud fractions, MFC shows a significant improvement in cloud fraction estimation, and it achieves high accuracy for cloud and cloud shadow detection in GF-1 WFV imagery despite the small number of spectral bands. The proposed method could be used in the future as a preprocessing step for monitoring land-cover change, and it could also be easily extended to other optical satellite imagery with a similar spectral setting. Comment: This manuscript has been accepted for publication in Remote Sensing of Environment, vol. 191, pp. 342-358, 2017 (http://www.sciencedirect.com/science/article/pii/S003442571730038X).
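
    MFC itself combines spectral thresholds, guided filtering, geometric tests and texture features, which is beyond a few lines of code. As a rough illustration of the first stage only, the sketch below applies a per-pixel spectral cloud test to a four-band (blue, green, red, NIR) reflectance array and cleans the result morphologically; the band order and threshold values are assumptions for illustration, not MFC's calibrated parameters.

```python
# Minimal sketch of a spectral cloud test on a 4-band (blue, green, red, NIR)
# image, loosely in the spirit of MFC's first thresholding step. The threshold
# values and band ordering are illustrative assumptions, not those of MFC.
import numpy as np
from scipy import ndimage

def rough_cloud_mask(img):
    """img: float array of shape (4, H, W) with TOA reflectance in [0, 1]."""
    blue, green, red, nir = img

    brightness = (blue + green + red + nir) / 4.0
    ndvi = (nir - red) / (nir + red + 1e-6)

    # Clouds are bright and spectrally flat (low |NDVI|); the thresholds are
    # placeholders to be tuned per sensor and scene.
    mask = (brightness > 0.3) & (np.abs(ndvi) < 0.2) & (blue > 0.22)

    # Remove isolated pixels and fill small holes in the preliminary mask.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    return mask

# Example with random data just to exercise the function.
demo = np.random.rand(4, 128, 128).astype(np.float32)
cloud = rough_cloud_mask(demo)
print("cloud fraction:", cloud.mean())
```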

    Coastal Aquaculture Extraction Using GF-3 Fully Polarimetric SAR Imagery: A Framework Integrating UNet++ with Marker-Controlled Watershed Segmentation

    Coastal aquaculture monitoring is vital for sustainable offshore aquaculture management. However, the dense distribution and various sizes of aquacultures make it challenging to accurately extract the boundaries of aquaculture ponds. In this study, we develop a novel combined framework that integrates UNet++ with a marker-controlled watershed segmentation strategy to facilitate aquaculture boundary extraction from fully polarimetric GaoFen-3 SAR imagery. First, four polarimetric decomposition algorithms were applied to extract 13 polarimetric scattering features. Together with nine other polarisation and texture features, a total of 22 polarimetric features were extracted, among which four were selected as optimal according to the separability index. Subsequently, to reduce the “adhesion” phenomenon and separate adjacent and even adhering ponds into individual aquaculture units, two UNet++ subnetworks were utilised to construct the marker and foreground functions, the results of which were then used in the marker-controlled watershed algorithm to obtain refined aquaculture results. A multiclass segmentation strategy that divides the intermediate markers into three categories (aquaculture, background and dikes) was applied to the marker function. In addition, a boundary patch refinement postprocessing strategy was applied to the two subnetworks to extract and repair the complex, error-prone boundaries of the aquaculture ponds, followed by a morphological operation for label augmentation. An experimental investigation into extracting individual aquacultures in the Yancheng Coastal Wetlands indicated that the crucial features for aquaculture extraction are Shannon entropy (SE), the intensity component of SE (SE_I) and the corresponding mean texture features (Mean_SE and Mean_SE_I). When the optimal features were introduced, the proposed method performed better than standard UNet++ in aquaculture extraction, achieving improvements of 1.8%, 3.2%, 21.7% and 12.1% in F1, IoU, MR and insF1, respectively. The experimental results indicate that the proposed method can effectively handle the adhesion of both adjacent objects and unclear boundaries, and capture clear, refined aquaculture boundaries.
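
    The post-processing idea, a marker-controlled watershed driven by two predicted maps (a conservative marker map and a foreground map), can be sketched with scikit-image. In the snippet below the two probability arrays stand in for the outputs of the two UNet++ subnetworks, and the 0.8/0.5 thresholds are illustrative assumptions rather than the values used in the paper.

```python
# Minimal sketch of marker-controlled watershed on two probability maps that
# stand in for the outputs of the two UNet++ subnetworks (marker branch and
# foreground branch). Thresholds are illustrative, not the paper's values.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def split_ponds(marker_prob, foreground_prob):
    """Both inputs: float arrays in [0, 1] with the same (H, W) shape."""
    # Conservative seeds: high-confidence pond interiors from the marker branch.
    seeds, n_seeds = ndimage.label(marker_prob > 0.8)

    # Foreground support region from the foreground branch.
    foreground = foreground_prob > 0.5

    # Flood from the seeds over an inverted "elevation" surface so that
    # watershed lines fall on the low-probability dikes between ponds.
    labels = watershed(-foreground_prob, markers=seeds, mask=foreground)
    return labels, n_seeds

# Tiny synthetic example: two blobs separated by a thin low-probability dike.
prob = np.zeros((64, 64))
prob[8:30, 8:56] = 0.9
prob[34:56, 8:56] = 0.9
labels, n = split_ponds(prob, prob)
print("seeds:", n, "segments:", labels.max())
```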

    Resolving Fine-Scale Surface Features on Polar Sea Ice: A First Assessment of UAS Photogrammetry Without Ground Control

    Mapping landfast sea ice at a fine spatial scale is not only meaningful for geophysical study, but also provides information about human activities upon the ice. The combination of unmanned aerial systems (UAS) with structure-from-motion (SfM) methods has already revolutionized the current close-range Earth observation paradigm. To test their feasibility in characterizing the properties and dynamics of fast ice, three flights were carried out in the 2016–2017 austral summer during the 33rd Chinese National Antarctic Expedition (CHINARE), focusing on the Prydz Bay area in East Antarctica. Three-dimensional models and orthomosaics from the three sorties were constructed from a total of 205 photos using Agisoft PhotoScan software. Logistical challenges presented by the terrain precluded the deployment of a dedicated ground control network; however, it was still possible to indirectly assess the performance of the photogrammetric products through an analysis of the statistics of the matching network, the bundle adjustment, and a Monte-Carlo simulation. Our results show that the matching networks are quite strong, given a sufficient number of feature points (mostly > 20,000) or valid matches (mostly > 1000). The largest contribution to the total error using our direct georeferencing approach is attributed to inaccuracies in the onboard position and orientation system (POS) records, especially in the vehicle height and yaw angle. The 3D precision map reveals that the planimetric precision is usually about one-third of the vertical estimate (typically 20 cm in the network centre), while shape-only errors account for less than 5% for the X and Y dimensions and about 20% for the Z dimension. To further illustrate the UAS’s capability, six representative surface features are selected and interpreted by sea ice experts. Finally, we offer pragmatic suggestions and guidelines for planning future UAS-SfM surveys without the use of ground control. The work represents a pioneering attempt to comprehensively assess UAS-SfM survey capability in fast ice environments, and could serve as a reference for future improvements.
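
    To give a feel for how the dominant POS error sources identified above (height and yaw) propagate into ground coordinates under direct georeferencing, the toy Monte-Carlo sketch below perturbs the pose of a single nadir image and reports the resulting scatter of one ground point. The flying height, point offset and sigma values are illustrative assumptions, not the paper's estimates, and the model ignores the full bundle-adjustment geometry.

```python
# Toy Monte-Carlo propagation of POS errors (height and yaw) into the ground
# coordinates of a single point seen in a nadir image. All values below are
# illustrative assumptions, not results from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

flying_height = 100.0                    # assumed height above the ice (m)
ground_offset = np.array([30.0, 10.0])   # point 30 m east, 10 m north of nadir

sigma_yaw = np.deg2rad(0.5)              # assumed yaw uncertainty (rad)
sigma_h = 0.3                            # assumed height uncertainty (m)

yaw_err = rng.normal(0.0, sigma_yaw, n)
h_err = rng.normal(0.0, sigma_h, n)

# A yaw error rotates the ground offset about the nadir point.
cos_e, sin_e = np.cos(yaw_err), np.sin(yaw_err)
east = cos_e * ground_offset[0] - sin_e * ground_offset[1]
north = sin_e * ground_offset[0] + cos_e * ground_offset[1]

# A height error rescales the projected offset and shifts the surface height.
scale = (flying_height + h_err) / flying_height
east, north = east * scale, north * scale
z = h_err                                # reconstructed surface-height error

print("sigma E/N/Z (m):", east.std(), north.std(), z.std())
```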

    Object-Based Greenhouse Mapping Using Very High Resolution Satellite Data and Landsat 8 Time Series

    Greenhouse mapping through remote sensing has received extensive attention over the last decades. In this article, the novel goal is to map greenhouses through the combined use of very high resolution satellite data (WorldView-2) and Landsat 8 Operational Land Imager (OLI) time series within the context of object-based image analysis (OBIA) and decision tree classification. Thus, WorldView-2 data were mainly used to segment the study area, focusing on individual greenhouses. Basic spectral information, spectral and vegetation indices, textural features, seasonal statistics and a spectral metric (the Moment Distance Index, MDI) derived from the Landsat 8 time series and/or WorldView-2 imagery were computed for the previously segmented image objects. In order to test its temporal stability, the same approach was applied to two different years, 2014 and 2015. In both years, MDI was identified as the most important feature for detecting greenhouses. Moreover, the threshold value of this spectral metric turned out to be extremely stable for both the Landsat 8 and WorldView-2 imagery. A simple decision tree, always using the same threshold values for the features derived from the Landsat 8 time series and WorldView-2, was finally proposed. Overall accuracies of 93.0% and 93.3% and kappa coefficients of 0.856 and 0.861 were attained for the 2014 and 2015 datasets, respectively.
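
    The classifier itself is a small decision tree applied to per-object features. The sketch below shows that structure only: the feature set is reduced to three placeholder attributes and the thresholds are invented for illustration, so neither the MDI computation nor the calibrated threshold values of the study are reproduced here.

```python
# Minimal sketch of a per-object decision tree in the spirit of the approach
# above: each previously segmented object carries a few features and is
# labelled by fixed thresholds. Feature names and thresholds are placeholders,
# not the values calibrated in the study.
from dataclasses import dataclass

@dataclass
class SegmentFeatures:
    mdi: float           # Moment Distance Index from the time series
    ndvi_summer: float   # seasonal vegetation index
    brightness: float    # mean object brightness

def classify(obj: SegmentFeatures) -> str:
    if obj.mdi > 1.0:                 # placeholder threshold on MDI
        if obj.ndvi_summer < 0.3:     # placeholder vegetation screen
            return "greenhouse"
        return "vegetation"
    if obj.brightness > 0.35:         # placeholder bright built-up split
        return "built-up"
    return "other"

print(classify(SegmentFeatures(mdi=1.4, ndvi_summer=0.12, brightness=0.4)))
```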

    Artificial Neural Networks and Evolutionary Computation in Remote Sensing

    Artificial neural networks (ANNs) and evolutionary computation methods have been successfully applied in remote sensing applications, since they offer unique advantages for the analysis of remotely sensed images. ANNs are effective in finding underlying relationships and structures within multidimensional datasets. Thanks to new sensors, we have images with more spectral bands at higher spatial resolutions, which clearly raises big data problems, and evolutionary algorithms offer an effective means of analysing such data. This book includes eleven high-quality papers, selected after a careful reviewing process, that address current remote sensing problems. In the chapters of the book, superstructural optimization was suggested for the optimal design of feedforward neural networks; CNNs were deployed on a nanosatellite payload to select images eligible for transmission to the ground; a new weight feature value convolutional neural network (WFCNN) was applied to fine remote sensing image segmentation and the extraction of improved land-use information; a mask region-based convolutional neural network (Mask R-CNN) was employed to extract valley fill faces; state-of-the-art convolutional neural network (CNN)-based object detection models were applied to automatically detect airplanes and ships in VHR satellite images; a coarse-to-fine detection strategy was employed to detect ships of different sizes; and a deep quadruplet network (DQN) was proposed for hyperspectral image classification.
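
    As a generic illustration of the ANN-plus-evolutionary-computation pairing the book is about (not a reproduction of any chapter), the sketch below runs a tiny genetic algorithm over the hidden-layer widths of a scikit-learn MLP on synthetic data, using cross-validated accuracy as the fitness function.

```python
# Tiny genetic algorithm searching over the hidden-layer widths of an MLP.
# A generic ANN + evolutionary-computation sketch; it does not reproduce any
# chapter of the book. Population and generation sizes are kept small.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

def fitness(widths):
    """Mean cross-validated accuracy of an MLP with the given layer widths."""
    clf = MLPClassifier(hidden_layer_sizes=tuple(int(w) for w in widths),
                        max_iter=300, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Each individual is a pair of hidden-layer widths in [4, 64].
pop = [rng.integers(4, 65, size=2) for _ in range(6)]
for generation in range(5):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:3]                       # truncation selection
    children = []
    for _ in range(3):
        a, b = rng.choice(len(parents), 2, replace=False)
        child = np.where(rng.random(2) < 0.5, parents[a], parents[b])  # crossover
        child = np.clip(child + rng.integers(-8, 9, size=2), 4, 64)    # mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best hidden layers:", tuple(int(w) for w in best),
      "cv accuracy:", round(fitness(best), 3))
```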

    Combining Multiple Algorithms for Road Network Tracking from Multiple Source Remotely Sensed Imagery: a Practical System and Performance Evaluation

    In light of the increasing availability of commercial high-resolution imaging sensors, automatic interpretation tools are needed to extract road features. Many approaches for road extraction are currently available, but it is acknowledged that no single method can successfully extract all types of roads from any remotely sensed imagery. In this paper, a novel classification of roads is proposed, based on the roads' geometric and radiometric properties as well as the characteristics of the sensors. Subsequently, a general road tracking framework is proposed, and one or more suitable road trackers are designed or combined for each type of road. Extensive experiments are performed to extract roads from aerial and satellite imagery, and the results show that the combination strategy can automatically extract more than 60% of the total roads from very high resolution imagery such as QuickBird and DMC images, with a time saving of approximately 20% and acceptable spatial accuracy. It is shown that a combination of multiple algorithms is more reliable, more efficient and more robust for extracting road networks from multi-source remotely sensed imagery than any individual algorithm.
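
    Many classical road trackers follow the same loop: step along the current road direction, sample a cross-sectional profile, and re-centre on it. The sketch below implements that loop for a bright road on a synthetic grayscale image; it illustrates the general tracking idea only and is not the road classification or tracker-combination framework evaluated in the paper.

```python
# Minimal sketch of a classical profile-based road tracker: starting from a
# seed point and direction, step along the road and re-centre each step on the
# brightness-weighted centroid of a cross-sectional profile. Illustrative only;
# not the combination framework of the paper.
import numpy as np

def track_road(image, start, direction, steps=40, step_len=2.0, half_width=6):
    """Track a bright road on a grayscale image from `start` along `direction`."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    normal = np.array([-direction[1], direction[0]])   # perpendicular to the road
    pos = np.asarray(start, float)
    path = [pos.copy()]
    h, w = image.shape
    offsets = np.arange(-half_width, half_width + 1)
    for _ in range(steps):
        pos = pos + step_len * direction
        # Sample a cross-sectional profile around the predicted position.
        samples = pos + offsets[:, None] * normal
        rows = np.clip(np.round(samples[:, 0]).astype(int), 0, h - 1)
        cols = np.clip(np.round(samples[:, 1]).astype(int), 0, w - 1)
        profile = image[rows, cols]
        # Re-centre on the brightness-weighted centroid of the profile.
        shift = (offsets * profile).sum() / (profile.sum() + 1e-9)
        pos = pos + shift * normal
        path.append(pos.copy())
    return np.array(path)

# Synthetic test: a bright diagonal road (5 px wide) on a dark background.
img = np.zeros((100, 100))
for r in range(100):
    c = int(20 + 0.5 * r)
    img[r, max(c - 2, 0):c + 3] = 1.0

path = track_road(img, start=(5, 22), direction=(1.0, 0.5))
print("tracked", len(path), "points, last position:", path[-1].round(1))
```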