
    Classifying multisensor remote sensing data: Concepts, Algorithms and Applications

    Today, a large proportion of the Earth’s land surface has been affected by human-induced land cover change. Detailed knowledge of land cover is fundamental to many decision-support and monitoring systems. Earth-observation (EO) systems have the potential to provide frequent information on land cover. Thus, many land cover classifications are performed based on remotely sensed EO data. In this context, it has been shown that the performance of remote sensing applications is further improved by multisensor data sets, such as combinations of synthetic aperture radar (SAR) and multispectral imagery. The two systems operate in different wavelength domains and therefore provide different yet complementary information on land cover. Considering the improved revisit times and spatial resolutions of recent and upcoming systems such as TerraSAR-X (11 days; up to 1 m), Radarsat-2 (24 days; up to 3 m), or the RapidEye constellation (up to 1 day; 5 m), multisensor approaches become even more promising. However, such data sets with high spatial and temporal resolution can become very large and complex. Commonly used statistical pattern recognition methods are usually not appropriate for the classification of multisensor data sets. Hence, one of the greatest challenges in remote sensing might be the development of adequate concepts for classifying multisensor imagery. The presented study aims at an adequate classification of multisensor data sets, including SAR data and multispectral images. Different conventional classifiers and recent developments are used, such as support vector machines (SVM) and random forests (RF), which are well known in the fields of machine learning and pattern recognition. Furthermore, the impact of image segmentation on classification accuracy is investigated and the value of a multilevel concept is discussed. To increase the performance of the algorithms in terms of classification accuracy, the SVM concept is modified and combined with RF for optimized decision making. The results clearly demonstrate that the use of multisensor imagery is worthwhile. Irrespective of the classification method used, classification accuracies increase when SAR and multispectral imagery are combined. Nevertheless, SVM and RF are better suited to classifying multisensor data sets and significantly outperform conventional classifier algorithms in terms of accuracy. The finally introduced multisensor multilevel classification strategy, which is based on the sequential use of SVM and RF, outperforms all other approaches. The proposed concept achieves an accuracy of 84.9%. This is significantly higher than all single-source results and also better than those achieved with any other combination of data. Both aspects, i.e. the fusion of SAR and multispectral data as well as the integration of multiple segmentation scales, improve the results. In contrast to the high accuracy achieved by the proposed concept, pixel-based classification of the single-source data sets reaches maximum accuracies of only 65% (SAR) and 69.8% (multispectral), respectively. The findings and good performance of the presented strategy are underlined by the successful application of the approach to data sets from a second year. Based on the results of this work it can be concluded that the suggested strategy is particularly interesting with regard to recent and future satellite missions.
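    The sequential SVM-RF idea described above can be illustrated with a small, hedged sketch: an SVM is trained on the stacked SAR and multispectral features, and its per-class decision values are then handed to a Random Forest for the final decision. This is a minimal illustration with placeholder arrays, not the study's actual pipeline; multiple segmentation levels would simply contribute additional score columns.

```python
# Hedged sketch of a sequential SVM -> RF decision scheme on stacked
# SAR + multispectral features. Placeholder data; not the study's code.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder per-sample features; in practice these would be SAR backscatter/
# texture values and multispectral bands per pixel or segment.
X_sar = rng.normal(size=(600, 4))
X_msi = rng.normal(size=(600, 6))
y = rng.integers(0, 5, size=600)          # land cover class labels

# 1) Multisensor fusion: concatenate the two feature spaces.
X = np.hstack([X_sar, X_msi])

# 2) SVM stage: keep the per-class (one-vs-rest) decision values rather than
#    hard labels, so the next stage can weigh them.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", decision_function_shape="ovr"))
svm.fit(X, y)
svm_scores = svm.decision_function(X)     # shape (n_samples, n_classes)

# 3) RF stage: a Random Forest makes the final decision from the SVM outputs;
#    further segmentation levels would add further score columns.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(svm_scores, y)
land_cover = rf.predict(svm_scores)
```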

    Can I Trust My One-Class Classification?

    Contrary to binary and multi-class classifiers, the purpose of a one-class classifier for remote sensing applications is to map only one specific land use/land cover class of interest. Training such a classifier requires reference data only for the class of interest; training data for other classes are not needed. Thus, the effort for acquiring reference data can be reduced significantly. However, one-class classification is fraught with uncertainty and full automation is difficult, due to the limited reference information available for classifier training. Therefore, a user-oriented one-class classification strategy is proposed, which is based, among other elements, on the visualization and interpretation of the one-class classifier outcomes during data processing. Careful interpretation of the diagnostic plots fosters understanding of the classification outcome, e.g., the class separability and the suitability of a particular threshold. In the absence of complete and representative validation data, which is usually the case in a real one-class classification application, such information is valuable for evaluating and improving the classification. The potential of the proposed strategy is demonstrated by classifying different crop types with hyperspectral data from Hyperion.
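    The diagnostic-plot idea can be sketched as follows, assuming a one-class SVM as a stand-in for whichever one-class classifier is actually used: the classifier is trained on samples of the class of interest only, and a histogram of decision values over the scene versus the training samples is inspected to judge separability and choose a threshold. All arrays are placeholders; this is not the tool used in the study.

```python
# Hedged sketch: one-class SVM trained only on the class of interest, plus a
# diagnostic histogram of decision values to judge separability and choose a
# threshold. Placeholder data; not the study's classifier or tool.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X_target = rng.normal(loc=1.0, size=(200, 50))    # reference spectra of the crop of interest
X_scene = rng.normal(loc=0.0, size=(5000, 50))    # all pixels to be classified

ocsvm = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=0.1, gamma="scale"))
ocsvm.fit(X_target)                               # trained on the class of interest only

# Diagnostic plot: decision-value distributions of scene pixels vs. training
# samples; clear separation suggests a usable threshold.
scores_scene = ocsvm.decision_function(X_scene)
scores_target = ocsvm.decision_function(X_target)
plt.hist(scores_scene, bins=50, alpha=0.5, label="scene pixels")
plt.hist(scores_target, bins=50, alpha=0.5, label="class of interest")
plt.axvline(0.0, color="k", linestyle="--", label="default threshold")
plt.xlabel("one-class SVM decision value")
plt.legend()
plt.show()

threshold = 0.0                                   # adjust after inspecting the plot
class_map = scores_scene >= threshold             # pixels mapped to the class of interest
```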

    Towards Daily High-resolution Inundation Observations using Deep Learning and EO

    Satellite remote sensing presents a cost-effective solution for synoptic flood monitoring, and satellite-derived flood maps provide a computationally efficient alternative to the numerical flood inundation models traditionally used. While satellites do offer timely inundation information when they happen to cover an ongoing flood event, their spatiotemporal resolution limits their ability to monitor flood evolution dynamically at various scales. Constantly improving access to new satellite data sources, together with big-data processing capabilities, has unlocked an unprecedented number of possibilities for data-driven solutions to this problem. Specifically, fusing data from satellites such as the Copernicus Sentinels, which have high spatial and low temporal resolution, with data from the NASA SMAP and GPM missions, which have low spatial but high temporal resolution, could yield high-resolution flood inundation estimates at a daily scale. Here, for the first time, a convolutional neural network is trained on flood inundation maps derived from Sentinel-1 Synthetic Aperture Radar data and on various hydrological, topographical, and land-use based predictors to predict high-resolution probabilistic maps of flood inundation. The performance of the UNet and SegNet model architectures for this task is evaluated separately with flood masks derived from Sentinel-1 and Sentinel-2, using 95% confidence intervals. The area under the precision-recall curve (PR-AUC) is used as the main evaluation metric, due to the inherently imbalanced nature of the classes in a binary flood mapping problem, with the best model delivering a PR-AUC of 0.85.
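    The evaluation step can be illustrated with a brief sketch (the networks themselves are not reproduced here): PR-AUC is computed from a predicted inundation probability map and a binary reference mask, which is better suited than ROC-AUC when the flood class is rare. Arrays are placeholders standing in for the model output and a Sentinel-derived mask.

```python
# Hedged sketch of the evaluation metric only: PR-AUC of a predicted flood
# probability map against a binary reference mask. Placeholder arrays.
import numpy as np
from sklearn.metrics import precision_recall_curve, auc, average_precision_score

rng = np.random.default_rng(2)
y_true = (rng.random(10_000) < 0.1).astype(int)                  # imbalanced flood / non-flood reference
y_prob = np.clip(0.6 * y_true + 0.5 * rng.random(10_000), 0, 1)  # predicted inundation probabilities

precision, recall, _ = precision_recall_curve(y_true, y_prob)
pr_auc = auc(recall, precision)                  # area under the precision-recall curve
ap = average_precision_score(y_true, y_prob)     # closely related summary statistic
print(f"PR-AUC = {pr_auc:.3f}, average precision = {ap:.3f}")
```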

    Evaluation of Multi-Frequency SAR Images for Tropical Land Cover Mapping

    Earth Observation (EO) data play a major role in monitoring compliance with several multilateral environmental treaties, such as UN-REDD+ (United Nations Reducing Emissions from Deforestation and Forest Degradation). In this context, land cover maps derived from remote sensing data are the most commonly used EO products, and the development of adequate classification strategies is an ongoing research topic. However, the availability of meaningful multispectral data sets can be limited due to cloud cover, particularly in the tropics. In such regions, the use of SAR (Synthetic Aperture Radar) systems, which are nearly independent of weather conditions, is particularly promising. With an ever-growing number of SAR satellites, as well as the increasing accessibility of SAR data, the potential for multi-frequency remote sensing is growing. In our study, we evaluate the synergistic contribution of multitemporal L-, C-, and X-band data to tropical land cover mapping. We compare classification outcomes of ALOS-2, RADARSAT-2, and TerraSAR-X data sets for a study site in the Brazilian Amazon using a wrapper approach. After preprocessing and calculation of GLCM (Grey Level Co-Occurrence Matrix) texture, the wrapper utilizes Random Forest classifications to estimate scene importance. Comparing the contribution of the different wavelengths, ALOS-2 data perform best in terms of overall classification accuracy, while the classification of TerraSAR-X data yields higher accuracies than that of RADARSAT-2. Moreover, the wrapper underlines the potential of multi-frequency classification, as the integration of multi-frequency images is always preferred over multi-temporal, mono-frequency composites. We conclude that, despite distinct advantages of certain sensors, multi-sensor integration is beneficial for land cover classification.
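    The two technical ingredients named above, GLCM texture and a Random-Forest-based wrapper, can be sketched roughly as follows. This is a hedged illustration with placeholder data, not the study's exact wrapper: texture is derived for one patch with scikit-image, and each frequency band's scene is scored by Random Forest cross-validation accuracy.

```python
# Hedged sketch: GLCM texture for one SAR patch, and a simple wrapper-style
# ranking in which RF cross-validation accuracy scores each frequency band.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def glcm_features(patch_8bit):
    """Mean GLCM contrast and homogeneity for one 8-bit image patch."""
    glcm = graycomatrix(patch_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, "contrast").mean(),
            graycoprops(glcm, "homogeneity").mean()]

patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # e.g. a rescaled backscatter patch
print(glcm_features(patch))

# Placeholder per-sample feature blocks, one per frequency band.
scenes = {
    "ALOS-2 (L-band)": rng.normal(size=(300, 6)),
    "RADARSAT-2 (C-band)": rng.normal(size=(300, 6)),
    "TerraSAR-X (X-band)": rng.normal(size=(300, 6)),
}
y = rng.integers(0, 4, size=300)                              # land cover labels

for name, X in scenes.items():
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    print(name, "CV accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(2))
```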

    Mapping Chestnut Stands Using Bi-Temporal VHR Data

    This study analyzes the potential of very high resolution (VHR) remote sensing images and extended morphological profiles for mapping chestnut stands on Tenerife Island (Canary Islands, Spain). Given their relevance for ecosystem services in the region (cultural and provisioning services), the public sector demands up-to-date information on chestnut stands, and a simple, straightforward approach is presented in this study. We used two VHR WorldView images (March and May 2015) to cover different phenological phases. Moreover, we included spatial information in the classification process through extended morphological profiles (EMPs). Random forest is used for the classification, and we analyzed the impact of the bi-temporal information as well as of the spatial information on the classification accuracies. The detailed accuracy assessment clearly reveals the benefit of bi-temporal VHR WorldView images and of the spatial information derived by EMPs in terms of mapping accuracy. The bi-temporal classification outperforms, or at least performs equally well as, the classifications based on the mono-temporal data. The inclusion of spatial information by EMPs further increases the classification accuracy by 5% and reduces the quantity and allocation disagreements in the final map. Overall, the proposed classification strategy proves useful for mapping chestnut stands in a heterogeneous and complex landscape such as the municipality of La Orotava, Tenerife.
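    The EMP idea can be illustrated in its usual textbook form, as a hedged sketch rather than the study's implementation: openings and closings by reconstruction with growing structuring elements are stacked as spatial features (here for a single placeholder band) and classified pixel-wise with a Random Forest.

```python
# Hedged sketch of an extended morphological profile (EMP): openings and
# closings by reconstruction with growing structuring elements, stacked as
# spatial features and fed to a Random Forest. Placeholder band and labels.
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
band = rng.random((128, 128))                    # e.g. one principal component of a WorldView scene

def opening_by_reconstruction(img, radius):
    return reconstruction(erosion(img, disk(radius)), img, method="dilation")

def closing_by_reconstruction(img, radius):
    return reconstruction(dilation(img, disk(radius)), img, method="erosion")

# Profile: the band itself plus one opening and one closing per SE radius.
radii = [2, 4, 8]
profile = [band]
for r in radii:
    profile.append(opening_by_reconstruction(band, r))
    profile.append(closing_by_reconstruction(band, r))
emp = np.stack(profile, axis=-1)                 # shape (H, W, 1 + 2 * len(radii))

# Pixel-wise Random Forest on the EMP features (labels are placeholders).
X = emp.reshape(-1, emp.shape[-1])
y = rng.integers(0, 3, size=X.shape[0])
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0).fit(X, y)
```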

    RANDOM FORESTS FOR CLASSIFYING MULTI-TEMPORAL SAR DATA

    The accuracy of supervised land cover classifications depends on several factors, such as the chosen algorithm, adequate training data and the selection of features. With regard to multi-temporal remote sensing imagery, statistical classifiers are often not applicable. In the study presented here, a Random Forest was applied to a SAR data set consisting of 15 acquisitions. A detailed accuracy assessment shows that the Random Forest significantly improves upon the single decision tree and can outperform other classifiers in terms of accuracy. A visual interpretation confirms the statistical accuracy assessment: the imagery is classified into more homogeneous regions and the noise is significantly decreased. The additional time needed to generate the Random Forest is small and justified, as it is still much faster than other state-of-the-art classifiers.
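    The comparison described above can be mirrored in a short, hedged sketch with placeholder data: the multi-temporal acquisitions are stacked into one feature vector per pixel, and a single decision tree is compared with a Random Forest by cross-validation.

```python
# Hedged sketch with placeholder data: stack multi-temporal SAR acquisitions
# per pixel and compare a single decision tree with a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n_pixels, n_dates = 2000, 15
X = rng.normal(size=(n_pixels, n_dates))         # backscatter time series, one column per acquisition
y = rng.integers(0, 6, size=n_pixels)            # land cover labels

tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)

print("single decision tree:", cross_val_score(tree, X, y, cv=5).mean().round(2))
print("random forest       :", cross_val_score(forest, X, y, cv=5).mean().round(2))
```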