
    Classifying multisensor remote sensing data : Concepts, Algorithms and Applications

    Today, a large proportion of the Earth’s land surface has been affected by human-induced land cover changes. Detailed knowledge of the land cover is fundamental to many decision support and monitoring systems. Earth-observation (EO) systems have the potential to provide frequent information on land cover. Thus, many land cover classifications are performed based on remotely sensed EO data. In this context, it has been shown that the performance of remote sensing applications is further improved by multisensor data sets, such as combinations of synthetic aperture radar (SAR) and multispectral imagery. The two systems operate in different wavelength domains and therefore provide different yet complementary information on land cover. Considering the improved revisit times and spatial resolutions of recent and upcoming systems like TerraSAR-X (11 days; up to 1 m), Radarsat-2 (24 days; up to 3 m), or the RapidEye constellation (up to 1 day; 5 m), multisensor approaches become even more promising. However, these data sets with high spatial and temporal resolution can become very large and complex. Commonly used statistical pattern recognition methods are usually not appropriate for the classification of multisensor data sets. Hence, one of the greatest challenges in remote sensing might be the development of adequate concepts for classifying multisensor imagery. The presented study aims at an adequate classification of multisensor data sets, including SAR data and multispectral images. Different conventional classifiers and recent developments are used, such as support vector machines (SVM) and random forests (RF), which are well known in the field of machine learning and pattern recognition. Furthermore, the impact of image segmentation on the classification accuracy is investigated and the value of a multilevel concept is discussed. To increase the performance of the algorithms in terms of classification accuracy, the concept of SVM is modified and combined with RF for optimized decision making. The results clearly demonstrate that the use of multisensor imagery is worthwhile. Irrespective of the classification method used, classification accuracies increase when SAR and multispectral imagery are combined. Nevertheless, SVM and RF are more adequate for classifying multisensor data sets and significantly outperform conventional classifier algorithms in terms of accuracy. The multisensor-multilevel classification strategy ultimately introduced, which is based on the sequential use of SVM and RF, outperforms all other approaches. The proposed concept achieves an accuracy of 84.9%. This is significantly higher than all single-source results and also better than those achieved on any other combination of data. Both aspects, i.e. the fusion of SAR and multispectral data as well as the integration of multiple segmentation scales, improve the results. In contrast to the high accuracy achieved by the proposed concept, pixel-based classification of single-source data sets reaches maximum accuracies of only 65% (SAR) and 69.8% (multispectral), respectively. The findings and good performance of the presented strategy are underlined by the successful application of the approach to data sets from a second year. Based on the results of this work, it can be concluded that the suggested strategy is particularly interesting with regard to recent and future satellite missions.
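
    As a rough, self-contained illustration of the feature-level part of such a multisensor workflow (not the study's actual multilevel strategy), the sketch below stacks simulated SAR and multispectral features per pixel and trains an SVM and a random forest with scikit-learn; all arrays, class counts and parameter values are assumptions made for the example.

        # Minimal sketch: pixel-wise fusion of SAR and multispectral features,
        # classified with SVM and Random Forest. All inputs are synthetic
        # placeholders, not data from the study.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_pixels = 5000
        sar = rng.normal(size=(n_pixels, 4))        # e.g. multi-temporal SAR backscatter
        optical = rng.normal(size=(n_pixels, 6))    # e.g. multispectral bands
        labels = rng.integers(0, 5, size=n_pixels)  # e.g. five land cover classes

        # Multisensor input: simple feature stacking of both sources.
        X = np.hstack([sar, optical])
        X_train, X_test, y_train, y_test = train_test_split(
            X, labels, test_size=0.3, random_state=0)

        svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        print("SVM accuracy:", svm.score(X_test, y_test))
        print("RF accuracy:", rf.score(X_test, y_test))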

    Review on Active and Passive Remote Sensing Techniques for Road Extraction

    Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review of road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar (SAR) images, and light detection and ranging (LiDAR). The review is divided into three parts. Part 1 provides an overview of the existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 outlines the main road extraction methods based on the four data sources; the methods for each data source are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and combining multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies.

    Change detection of isolated housing using a new hybrid approach based on object classification with optical and TerraSAR-X data

    Optical and microwave high spatial resolution images are now available for a wide range of applications. In this work, they have been applied to the semi-automatic change detection of isolated housing in agricultural areas. This article presents a new hybrid methodology based on segmentation of high-resolution images and image differencing. The new approach combines the main techniques used in change detection methods and adds a final segmentation step in order to classify the change detection product. First, isolated buildings are classified using only optical data. Then, synthetic aperture radar (SAR) information is added to the classification process, obtaining excellent results at a lower computational cost. Since the first classification step is improved, the overall change detection scheme is also enhanced when the radar data are used for classification. Finally, the different methods are compared and conclusions are drawn from the study. © 2011 Taylor & Francis.
    Vidal Pantaleoni, A., & Moreno Cambroreno, M. D. R. (2011). Change detection of isolated housing using a new hybrid approach based on object classification with optical and TerraSAR-X data. International Journal of Remote Sensing, 32(24), 9621-9635. doi:10.1080/01431161.2011.571297
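
    The image-differencing step at the core of such hybrid change detection schemes can be sketched as follows; the decision rule (a k-sigma threshold on the difference image) and the synthetic inputs are assumptions for illustration, not the authors' implementation.

        # Minimal sketch of change detection by image differencing on two
        # co-registered single-band images; threshold rule and data are illustrative.
        import numpy as np

        def change_mask(image_t1, image_t2, k=2.0):
            """Boolean change mask: True where the change magnitude is large."""
            diff = image_t2.astype(np.float64) - image_t1.astype(np.float64)
            # Flag pixels whose deviation from the mean difference exceeds
            # k standard deviations (a simple, widely used decision rule).
            return np.abs(diff - diff.mean()) > k * diff.std()

        t1 = np.random.rand(256, 256)
        t2 = t1.copy()
        t2[100:120, 100:120] += 0.8   # simulate a newly built structure
        print("changed pixels:", int(change_mask(t1, t2).sum()))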

    Comparative model for classification of forest degradation

    The challenges of forest degradation, together with its related effects, have attracted research from diverse disciplines, resulting in different definitions of the concept. However, according to a number of researchers, the central element of this issue is human intrusion that degrades the state of the environment. The focus of this research is therefore to develop a comparative model using a large amount of remote sensing data, such as IKONOS, QuickBird, SPOT, WorldView-1, TerraSAR-X, and fused data, to detect forest degradation in the Cameron Highlands. The output of this method feeds a performance measurement model in order to identify the best data, fused data, and technique to be employed. Eleven techniques were applied to fifteen data sets to develop the comparative technique, and its output was fed into the performance measurement model in order to enhance the accuracy of each classification technique. The performance measurement model was also used to verify the results of the comparative technique, and these outputs were validated using a reflectance library. In addition, the conceptual hybrid model proposed in this research gives researchers the opportunity to establish a fully automatic intelligent model in future work. The results demonstrate the Neural Network (NN) to be the best intelligent technique, with a Kappa coefficient of 0.912 and an overall accuracy of 96%, followed by the Mahalanobis classifier (Kappa coefficient of 0.795, overall accuracy of 88%) and Maximum Likelihood (ML; Kappa coefficient of 0.598, overall accuracy of 72%), all evaluated on the best fused image used in this research, produced by fusing the IKONOS and QuickBird images and finally employed in the comparative technique to improve the detectability of forest change.
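
    Since the study reports both overall accuracy and the Kappa coefficient for each classifier, the short sketch below shows how these two standard measures are derived from a confusion matrix; the 3-class matrix is a made-up example, not data from the study.

        # Overall accuracy and Cohen's Kappa from a confusion matrix
        # (rows: reference classes, columns: predicted classes).
        import numpy as np

        def accuracy_and_kappa(cm):
            cm = cm.astype(np.float64)
            total = cm.sum()
            p_o = np.trace(cm) / total                                  # observed agreement
            p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2    # chance agreement
            return p_o, (p_o - p_e) / (1.0 - p_e)

        cm = np.array([[50,  3,  2],
                       [ 4, 45,  6],
                       [ 1,  5, 40]])     # hypothetical 3-class result
        oa, kappa = accuracy_and_kappa(cm)
        print(f"overall accuracy = {oa:.3f}, kappa = {kappa:.3f}")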

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many domains. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    TerraSAR-X and Wetlands: A Review

    Since its launch in 2007, TerraSAR-X observations have been widely used in a broad range of scientific applications. Particularly in wetland research, TerraSAR-X's shortwave X-band synthetic aperture radar (SAR) possesses unique capabilities, such as high spatial and temporal resolution, for delineating and characterizing the inherently complex, heterogeneous, and spatially and temporally dynamic structure of wetland ecosystems. As transitional areas, wetlands combine characteristics of both terrestrial and aquatic environments, forming a large diversity of wetland types. This study reviews all published articles incorporating TerraSAR-X information into wetland research, providing a comprehensive picture of how this sensor has been used with regard to polarization, the function of the data, time-series analyses, and the assessment of specific wetland ecosystem types. What is evident throughout this literature review is the synergistic fusion of multi-frequency and multi-polarization SAR sensors, and sometimes optical sensors, in almost all of the investigated studies to attain improved wetland classification results. Due to the short revisit time of the TerraSAR-X sensor, dense SAR time series can be computed, allowing for a more precise observation of the seasonality of dynamic wetland areas, as demonstrated in many of the reviewed studies.

    Monitoring wetlands and water bodies in semi-arid Sub-Saharan regions

    Surface water in wetlands is a critical resource in semi-arid West African regions that are frequently exposed to droughts. Wetlands are of utmost importance for the population as well as the environment, and are subject to rapidly changing seasonal fluctuations. The dynamics of wetlands in the study area are still poorly understood, and the potential of remote sensing-derived information as a large-scale, multi-temporal, comparable and independent measurement source is not yet exploited. This work demonstrates successful wetland monitoring with remote sensing in savannah and Sahel regions of Burkina Faso, focusing on the main study site, Lac Bam (Lake Bam). Long-term optical time series from MODIS with medium spatial resolution (MR) and short-term synthetic aperture radar (SAR) time series from TerraSAR-X and RADARSAT-2 with high spatial resolution (HR) successfully demonstrate the classification and dynamic monitoring of relevant wetland features, e.g. open water, flooded vegetation and irrigated cultivation. Methodological highlights are time-series analysis, e.g. spatio-temporal dynamics and multi-temporal classification, as well as polarimetric SAR (polSAR) processing, i.e. the Kennaugh elements, which enable a physical interpretation of SAR scattering mechanisms for dual-polarized data. A multi-sensor and multi-frequency SAR data combination provides added value and reveals that dual-co-pol SAR data are most suitable for monitoring wetlands of this type. Environmental and man-made processes, such as water areas spreading further but retreating or evaporating faster, the co-occurrence of droughts with surface-water and vegetation anomalies, the expansion of irrigated agriculture, or new dam construction, can be detected and interpreted with MR optical and HR SAR time series. Remote sensing solutions are available to capture the long-term impacts of water extraction, sedimentation and climate change on wetlands, and have great potential to contribute to water management in Africa.
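
    As a simple, hedged illustration of the kind of open-water monitoring described here, the sketch below derives a water-extent time series from a stack of SAR backscatter images using a fixed dB threshold; the threshold value, pixel spacing and synthetic input stack are assumptions, and the study's actual processing (Kennaugh-element decomposition, multi-temporal classification) is considerably more involved.

        # Minimal sketch: open-water area per acquisition from a SAR backscatter
        # stack in dB. Open water typically appears dark (low backscatter), so a
        # simple threshold separates it; -17 dB and 10 m pixels are assumptions.
        import numpy as np

        def water_area_series(sar_stack_db, threshold_db=-17.0, pixel_size_m=10.0):
            """sar_stack_db: (time, rows, cols) array in dB -> area in km^2 per date."""
            water = sar_stack_db < threshold_db
            pixel_area_km2 = (pixel_size_m ** 2) / 1e6
            return water.sum(axis=(1, 2)) * pixel_area_km2

        stack = np.random.normal(loc=-10.0, scale=3.0, size=(12, 500, 500))
        for t, area in enumerate(water_area_series(stack)):
            print(f"date {t:02d}: {area:.2f} km^2 open water")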

    Learning a Joint Embedding of Multiple Satellite Sensors: A Case Study for Lake Ice Monitoring

    Fusing satellite imagery acquired with different sensors has been a long-standing challenge of Earth observation, particularly across different modalities such as optical and synthetic aperture radar (SAR) images. Here, we explore the joint analysis of imagery from different sensors in the light of representation learning: we propose to learn a joint embedding of multiple satellite sensors within a deep neural network. Our application problem is the monitoring of lake ice on Alpine lakes. To reach the temporal resolution requirement of the Swiss Global Climate Observing System (GCOS) office, we combine three image sources: Sentinel-1 SAR (S1-SAR), Terra MODIS, and Suomi-NPP VIIRS. The large gaps between the optical and SAR domains and between the sensor resolutions make this a challenging instance of the sensor fusion problem. Our approach can be classified as a late fusion that is learned in a data-driven manner. The proposed network architecture has separate encoding branches for each image sensor, which feed into a single latent embedding, i.e., a common feature representation shared by all inputs, such that subsequent processing steps deliver comparable output irrespective of which sort of input image was used. By fusing satellite data, we map lake ice at a temporal resolution of < 1.5 days. The network produces spatially explicit lake ice maps with pixel-wise accuracies > 91% (respectively, mIoU scores > 60%) and generalises well across different lakes and winters. Moreover, it sets a new state of the art for determining the important ice-on and ice-off dates for the target lakes, in many cases meeting the GCOS requirement.
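
    The described architecture, with one encoding branch per sensor feeding a shared latent embedding and a common head, can be sketched roughly as follows in PyTorch; all layer sizes, channel counts and names are illustrative assumptions, not the authors' published configuration.

        # Rough sketch of a learned late-fusion network: separate per-sensor
        # encoders map each input into a common embedding processed by a shared head.
        import torch
        import torch.nn as nn

        def make_branch(in_channels, embed_dim):
            # Small convolutional encoder into the shared embedding dimension.
            return nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, embed_dim, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )

        class JointEmbeddingNet(nn.Module):
            def __init__(self, embed_dim=64, n_classes=2):
                super().__init__()
                # One branch per image source (channel counts are placeholders).
                self.branches = nn.ModuleDict({
                    "sar": make_branch(2, embed_dim),    # e.g. S1-SAR VV/VH
                    "modis": make_branch(3, embed_dim),
                    "viirs": make_branch(3, embed_dim),
                })
                # Shared head operating on the common feature representation.
                self.head = nn.Conv2d(embed_dim, n_classes, kernel_size=1)

            def forward(self, x, sensor):
                z = self.branches[sensor](x)   # sensor-specific encoding into the joint space
                return self.head(z)            # e.g. per-pixel frozen/non-frozen logits

        net = JointEmbeddingNet()
        logits = net(torch.randn(1, 2, 128, 128), sensor="sar")
        print(logits.shape)   # torch.Size([1, 2, 128, 128])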