283 research outputs found

    Classifying multisensor remote sensing data: Concepts, Algorithms and Applications

    Today, a large proportion of the Earth’s land surface has been affected by human-induced land cover changes. Detailed knowledge of the land cover is fundamental to many decision support and monitoring systems. Earth-observation (EO) systems have the potential to provide information on land cover frequently, and many land cover classifications are therefore performed on remotely sensed EO data. In this context, it has been shown that the performance of remote sensing applications is further improved by multisensor data sets, such as combinations of synthetic aperture radar (SAR) and multispectral imagery. The two systems operate in different wavelength domains and therefore provide different yet complementary information on land cover. Considering the shorter revisit times and better spatial resolutions of recent and upcoming systems such as TerraSAR-X (11 days; up to 1 m), Radarsat-2 (24 days; up to 3 m), or the RapidEye constellation (up to 1 day; 5 m), multisensor approaches become even more promising. However, such data sets with high spatial and temporal resolution can become very large and complex, and commonly used statistical pattern recognition methods are usually not appropriate for their classification. Hence, one of the greatest challenges in remote sensing is the development of adequate concepts for classifying multisensor imagery. The presented study aims at an adequate classification of multisensor data sets comprising SAR data and multispectral images. Different conventional classifiers and recent developments are used, such as support vector machines (SVM) and random forests (RF), which are well known in the fields of machine learning and pattern recognition. Furthermore, the impact of image segmentation on the classification accuracy is investigated and the value of a multilevel concept is discussed. To increase the performance of the algorithms in terms of classification accuracy, the concept of SVM is modified and combined with RF for optimized decision making. The results clearly demonstrate that the use of multisensor imagery is worthwhile: irrespective of the classification method used, classification accuracies increase when SAR and multispectral imagery are combined. Nevertheless, SVM and RF are better suited to classifying multisensor data sets and significantly outperform conventional classifier algorithms in terms of accuracy. The finally introduced multisensor-multilevel classification strategy, which is based on the sequential use of SVM and RF, outperforms all other approaches. The proposed concept achieves an accuracy of 84.9%, which is significantly higher than all single-source results and also better than those achieved on any other combination of data. Both aspects, i.e. the fusion of SAR and multispectral data as well as the integration of multiple segmentation scales, improve the results. In contrast to the high accuracy achieved by the proposed concept, the pixel-based classification of single-source data sets achieves maximal accuracies of 65% (SAR) and 69.8% (multispectral), respectively. The findings and good performance of the presented strategy are underlined by the successful application of the approach to data sets from a second year. Based on the results of this work, it can be concluded that the suggested strategy is particularly interesting with regard to recent and future satellite missions.
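    As a rough illustration of the multisensor fusion idea in this abstract, the hypothetical sketch below stacks SAR and multispectral pixel features and classifies the fused vectors with both an SVM and a random forest. It is not the thesis's actual multilevel pipeline; the synthetic arrays, class count, and scikit-learn parameters are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code): pixel-level fusion of SAR and
# multispectral features, classified with SVM and random forest.
# All data below are random placeholders, so the printed accuracies are
# meaningless; real features would be extracted from co-registered imagery.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_classes = 5000, 6
sar_features = rng.normal(size=(n_pixels, 2))        # e.g. backscatter intensities
optical_features = rng.normal(size=(n_pixels, 4))    # e.g. multispectral reflectances
labels = rng.integers(0, n_classes, size=n_pixels)   # reference land cover labels

# Multisensor fusion by simple feature stacking.
fused = np.hstack([sar_features, optical_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.5, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=10, gamma="scale")),
                  ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```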

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. There are a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across these research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) wishing to conduct novel investigations on this challenging topic, by supplying sufficient detail and references.
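    The move from 2D/3D scenes to 4D data structures mentioned above can be pictured as a data cube indexed by time, band, and the two spatial dimensions. The minimal sketch below, using a synthetic NumPy array as a stand-in for real acquisitions, is only an illustration of that layout.

```python
# Minimal sketch (illustrative only) of a 4D multitemporal, multispectral stack
# indexed as (time, band, row, column); the dimensions are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_dates, n_bands, rows, cols = 12, 6, 256, 256
data_cube = rng.random((n_dates, n_bands, rows, cols))

# One acquisition (3D: band, row, column) and one pixel's time series (2D: time, band).
single_scene = data_cube[2]
pixel_series = data_cube[:, :, 120, 80]
print(single_scene.shape, pixel_series.shape)
```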

    Image Classification in Remote Sensing

    One of the most important uses of remote sensing data is the production of Land Use and Land Cover maps, a task carried out through a process called image classification. This paper looks into the components of the image classification process and its procedures, reviews image classification techniques, and explains two common techniques: the K-means classifier and the Support Vector Machine (SVM). Keywords: Remote Sensing, Image Classification, K-means Classifier, Support Vector Machine
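    To make the contrast between the two techniques named in the abstract concrete, the hypothetical sketch below clusters pixel spectra with K-means (unsupervised) and classifies them with an SVM (supervised) using scikit-learn; the synthetic pixel vectors and labels are placeholders, not the paper's data.

```python
# Minimal sketch (illustrative only): unsupervised K-means clustering versus
# supervised SVM classification of pixel spectra.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)
pixels = rng.normal(size=(1000, 4))      # hypothetical 4-band reflectance vectors
labels = rng.integers(0, 3, size=1000)   # hypothetical training labels

# Unsupervised: K-means groups pixels into spectral clusters; the analyst
# assigns a land cover meaning to each cluster afterwards.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)

# Supervised: the SVM learns decision boundaries from labelled training pixels.
svm = SVC(kernel="rbf").fit(pixels, labels)
predicted = svm.predict(pixels)
```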

    Deep learning-based change detection in remote sensing images: a review

    Images gathered from different satellites are widely available these days due to the fast development of remote sensing (RS) technology. These images significantly enrich the data sources for change detection (CD). CD is a technique for recognizing dissimilarities between images acquired at distinct times and is used for numerous applications, such as urban area development, disaster management, and land cover object identification. In recent years, deep learning (DL) techniques have been used extensively in change detection, where they have achieved great success in practical applications. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. Therefore, this review focuses on deep learning techniques, including supervised, unsupervised, and semi-supervised methods, for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. In the end, some significant challenges are discussed to understand the context of improvements in change detection datasets and deep learning models. Overall, this review will be beneficial for the future development of CD methods.
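    For context on what change detection computes, the sketch below shows the simplest non-deep-learning baseline: differencing two co-registered images and thresholding the result. The arrays and threshold rule are illustrative assumptions; the review itself surveys DL models that learn this mapping instead.

```python
# Minimal sketch of the change detection idea: compare two co-registered images
# of the same scene from different dates and flag pixels whose absolute
# difference exceeds a simple statistical threshold. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
image_t1 = rng.random((256, 256))            # hypothetical date-1 image
image_t2 = image_t1.copy()
image_t2[100:140, 100:140] += 0.5            # simulated change region

difference = np.abs(image_t2 - image_t1)
threshold = difference.mean() + 2 * difference.std()
change_map = difference > threshold          # boolean change / no-change map
print("changed pixels:", int(change_map.sum()))
```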

    Comparison of OLI and TM Multi-spectral Satellite Imagery Land-use and Land-cover Mapping Using Hierarchical Concept of Earth Surface Matrix

    The present study compares the capabilities of the Thematic Mapper (TM) and Operational Land Imager (OLI) sensors of the Landsat satellites and analyzes the results of image classification on their multi-spectral data. To achieve this, Landsat 5 TM (2011) and Landsat 8 OLI (2016) imagery were used to map the land-use and land-cover of a study area located in the Pelasjan sub-basin, Isfahan, Iran. First, radiometric and atmospheric corrections were performed, and then the overall status of the area was determined by reviewing topographic maps, visual interpretation of the satellite imagery, and field studies. Subsequently, a three-level land matrix hierarchy was established, comprising 1) a general level, 2) a mid-level, and 3) a detail level. Land matrix hierarchy maps were produced with appropriate methods using hybrid classification. The comparative analysis in this study showed that the hybrid classification method generates more accurate results from the OLI sensor data than from the TM imagery. This was particularly evident for residential areas, irrigated agriculture, rain-fed agriculture, and sparse and dense rangelands. Although the results of image classification showed higher accuracy for the OLI imagery, a Z-test on the error matrices did not identify any statistically significant difference between the two datasets. This highlights the importance of image classification method selection, which can overcome possible limitations of satellite imagery in land-use and land-cover mapping. Keywords: OLI and TM Sensors, land matrix hierarchy, hybrid classification, LULC, error matrix
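    The accuracy comparison described above can be sketched as follows: build an error (confusion) matrix for each classification and test whether the two accuracies differ significantly. The example below uses a simple two-proportion Z-test on overall accuracy with synthetic labels; the paper's own test on the error matrices may use a kappa-based statistic instead, so this is an assumption-laden illustration only.

```python
# Minimal sketch (assuming independent validation samples for each map):
# error matrices for two classifications and a two-proportion Z-test on
# their overall accuracies. Labels are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix

def overall_accuracy(reference, predicted):
    cm = confusion_matrix(reference, predicted)
    return np.trace(cm) / cm.sum(), cm

rng = np.random.default_rng(3)
reference = rng.integers(0, 5, size=400)
# Hypothetical classified labels: ~80% and ~70% of pixels agree with the reference.
pred_oli = np.where(rng.random(400) < 0.8, reference, rng.integers(0, 5, size=400))
pred_tm = np.where(rng.random(400) < 0.7, reference, rng.integers(0, 5, size=400))

p1, cm_oli = overall_accuracy(reference, pred_oli)
p2, cm_tm = overall_accuracy(reference, pred_tm)
n1 = n2 = reference.size

# Two-proportion Z statistic for the difference in overall accuracy.
z = (p1 - p2) / np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(f"OLI accuracy {p1:.3f}, TM accuracy {p2:.3f}, Z = {z:.2f}")
```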

    Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art

    The recent, sharp increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary data sets, however, opens up the possibility of utilizing multimodal data sets in a joint manner to further improve the performance of the processing approaches with respect to the applications at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms.

    Performance analysis of change detection techniques for land use land cover

    Remotely sensed satellite images have become essential for observing the spatial and temporal changes on the Earth’s surface caused by natural phenomena or human activity. Real-time monitoring of these data provides useful information on changes in the extent of urbanization, environmental conditions, water bodies, and forests. Through the use of remote sensing technology and geographic information system tools, it has become easier to monitor changes from past to present. In the present scenario, choosing a suitable change detection method plays a pivotal role in any remote sensing project. Previously, digital change detection was a tedious task; with the advent of machine learning techniques, it has become comparatively easier to detect changes in digital images. The study gives a brief account of the main change detection techniques related to land use land cover information. An effort is made to compare widely used change detection methods and to discuss the need for the development of enhanced change detection methods.
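    One of the widely used LULC change detection methods such a comparison typically covers is post-classification comparison, sketched below with synthetic class maps: each date is classified independently and the two maps are cross-tabulated into a "from-to" change matrix. The arrays and class count are illustrative assumptions only.

```python
# Minimal sketch (illustrative only) of post-classification comparison:
# cross-tabulate two classified maps into a from-to change matrix.
import numpy as np

rng = np.random.default_rng(4)
classes = 4
map_t1 = rng.integers(0, classes, size=(100, 100))   # hypothetical classified map, date 1
map_t2 = map_t1.copy()
map_t2[:30, :30] = (map_t2[:30, :30] + 1) % classes  # simulated land cover transitions

# Rows: class at date 1; columns: class at date 2. Off-diagonal cells are changes.
change_matrix = np.zeros((classes, classes), dtype=int)
np.add.at(change_matrix, (map_t1.ravel(), map_t2.ravel()), 1)
print(change_matrix)
```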