
    Multi-Fusion Algorithms for Detecting Land Surface Pattern Changes Using Multi-High Spatial Resolution Images and Remote Sensing Analysis

    Producing accurate Land-Use and Land-Cover (LU/LC) maps from low-spatial-resolution images is a difficult task, and pan-sharpening is crucial for estimating LU/LC patterns. This study aimed to identify the most precise procedure for estimating LU/LC by applying two fusion approaches, Color Normalized Brovey (BM) and Gram-Schmidt Spectral Sharpening (GS), to multi-sensor, multi-spectral images: high-spatial-resolution data from (1) an Unmanned Aerial Vehicle (UAV) system and (2) the WorldView-2 satellite, and low-spatial-resolution data from (3) the Sentinel-2 satellite. This produced six fused images, which were classified alongside the three original multi-spectral images using the Maximum Likelihood (ML) method. A confusion matrix was used to evaluate the accuracy of each classified image, and the results were statistically compared to determine the most reliable, accurate, and appropriate LU/LC map and procedure. Applying GS to the fused image that integrated WorldView-2 and Sentinel-2 imagery, classified by the ML method, produced the most accurate results, with an overall accuracy of 88.47% and a kappa coefficient of 0.85. By comparison, the overall accuracies of the three classified multispectral images range from 76.49% to 86.84%, and those of the remaining fused images (produced by the Brovey method and the other GS combinations) classified by the ML method range from 76.68% to 85.75%. The proposed procedure shows considerable promise for LU/LC mapping. Previous researchers have mostly used satellite images or datasets of similar spatial and spectral resolution to detect land surface patterns, at least in tropical areas like the study area of this research; the combined use of datasets with differing spectral and spatial resolutions, and its effect on LU/LC mapping accuracy, had not previously been examined. This study has successfully adopted datasets from different sensors with varying spectral and spatial levels to investigate this. Doi: 10.28991/ESJ-2023-07-04-013
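    The Brovey (color normalized) transform named above has a simple closed form: each multispectral band is scaled by the ratio of the panchromatic band to the sum of the multispectral bands at each pixel. The sketch below is a minimal, generic NumPy illustration of that formula, not the study's implementation; the function name, array shapes, and the assumption of co-registered, pre-resampled inputs are all ours.

```python
# Minimal sketch of the Brovey (color normalized) transform; shapes and names
# are illustrative assumptions, and the MS stack is presumed already resampled
# to the panchromatic grid.
import numpy as np

def brovey_fusion(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Fuse a (bands, rows, cols) multispectral stack with a (rows, cols) pan band."""
    ms = ms.astype(np.float64)
    total = ms.sum(axis=0)
    total[total == 0] = 1e-6           # avoid division by zero in dark pixels
    return ms * (pan / total)          # broadcasting applies the ratio to each band

# Toy usage with random data standing in for co-registered imagery.
rng = np.random.default_rng(0)
ms = rng.uniform(0, 255, size=(3, 64, 64))
pan = rng.uniform(0, 255, size=(64, 64))
print(brovey_fusion(ms, pan).shape)    # (3, 64, 64)
```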

    Extraction of Information from Multispectral and PAN of Landsat Image for Land Use Classification in the Case of Sodozuria Woreda, Wolaita Sodo, Ethiopia

    High-resolution and multispectral remote sensing images are an important data source for acquiring geospatial information for a variety of applications. Combining satellite images of different spectral and spatial resolutions with image processing techniques can improve the quality of the extracted information. More specifically, image fusion helps extract spatial information from two images of the same area that differ in spatial and spectral resolution, and image fusion techniques also help improve classification accuracy. To improve the information content of remote sensing satellite images at a specific spatial resolution, different image fusion techniques (Wavelet, PC, and IHS) were used to combine the panchromatic and multispectral datasets of Landsat ETM+ for information extraction. The images under study were used to identify existing land use types through supervised classification, which identified forest land, farm land, bare land, and built-up area as the most dominant land uses in the study area. The accuracy assessment of the supervised classification indicated that the original multispectral (MS) image produced 83.33% overall accuracy and a 0.750 kappa coefficient; the PC-fused image, 91.67% and 0.875; the IHS-fused image, 86.67% and 0.800; the Wavelet-PC transformation, 91.67% and 0.875; and the Wavelet-IHS transformation, 98.33% and 0.975. The Wavelet-IHS transformation thus produced the highest accuracy. Overall, judged by overall accuracy and kappa coefficient, the fused images perform far better than the original image for classification, even if some spectral information content is traded away in the fusion.
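    The overall accuracy and kappa figures quoted above both derive from the classification's confusion matrix: overall accuracy is the fraction of correctly labeled samples, and kappa discounts the agreement expected by chance. A minimal NumPy sketch of the standard formulas, with a made-up confusion matrix rather than the study's data:

```python
# Overall accuracy and Cohen's kappa from a confusion matrix; the matrix
# values below are invented for illustration, not taken from the study.
import numpy as np

def accuracy_and_kappa(cm: np.ndarray) -> tuple[float, float]:
    """cm: (classes, classes) matrix, rows = reference labels, cols = predictions."""
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement (overall accuracy)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # agreement expected by chance
    return po, (po - pe) / (1 - pe)

cm = np.array([[48,  2,  0],
               [ 3, 40,  2],
               [ 1,  1, 23]])
oa, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.4f}, kappa = {kappa:.4f}")
```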

    Image Fusion Techniques for Remote Sensing Applications

    Image fusion refers to the acquisition, processing, and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second addresses the fusion, using neural networks, of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times; the third presents a processor that fuses multifrequency, multipolarization, and multiresolution SAR images based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by applying the proposed techniques to real data.
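    Wavelet-based fusion of the kind used in the third case study is typically implemented by keeping the coarse approximation of one image and injecting the detail sub-bands of the other before reconstructing. The sketch below shows that generic scheme with PyWavelets; it does not reproduce the paper's coupling of the wavelet transform with a multiscale Kalman filter, and the wavelet choice and decomposition level are assumptions.

```python
# Generic wavelet substitution fusion: low-resolution approximation plus
# high-resolution details. Not the paper's Kalman-filter processor.
import numpy as np
import pywt

def wavelet_fusion(low_res: np.ndarray, high_res: np.ndarray,
                   wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Both inputs must share the same shape (i.e., be pre-resampled)."""
    coeffs_low = pywt.wavedec2(low_res, wavelet, level=level)
    coeffs_high = pywt.wavedec2(high_res, wavelet, level=level)
    fused = [coeffs_low[0]] + coeffs_high[1:]   # approximation + injected details
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(1)
low = rng.uniform(size=(128, 128))
high = rng.uniform(size=(128, 128))
print(wavelet_fusion(low, high).shape)          # (128, 128)
```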

    A comparison of multisensor integration methods for land cover classification in the Brazilian Amazon

    Many data fusion methods are available, but it is poorly understood which fusion method is suitable for integrating Landsat Thematic Mapper (TM) and radar data for land cover classification. This research explores the integration of Landsat TM and radar images (i.e., ALOS PALSAR L-band and RADARSAT-2 C-band) for land cover classification in a moist tropical region of the Brazilian Amazon. Different data fusion methods were explored: principal component analysis (PCA), the wavelet-merging technique (Wavelet), high-pass filter resolution-merging (HPF), and normalized multiplication (NMM). Land cover classification was conducted with maximum likelihood classification based on different scenarios. This research indicates that individual radar data yield much poorer land cover classifications than TM data, and that PALSAR L-band data perform relatively better than RADARSAT-2 C-band data. Compared to the TM data, Wavelet multisensor fusion improved overall classification accuracy by 3.3% to 5.7%, HPF performed similarly, but PCA and NMM reduced overall classification accuracy by 5.1% to 6.1% and by 7.6% to 12.7%, respectively. Different polarization options, such as HH and HV, work similarly when used in data fusion. This research underscores the importance of selecting a suitable data fusion method that can preserve spectral fidelity while improving spatial resolution.
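    Of the four fusion methods compared above, PCA merging is commonly implemented by substituting the first principal component of the optical bands with the higher-resolution image after matching its mean and standard deviation, then inverting the transform. The following scikit-learn sketch illustrates that generic substitution scheme under assumed array shapes; it is not the authors' exact processing chain.

```python
# Generic PCA-substitution fusion of a multispectral stack with a single
# co-registered radar (or pan) band; shapes and preprocessing are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def pca_fusion(ms: np.ndarray, hires: np.ndarray) -> np.ndarray:
    """ms: (bands, rows, cols) stack; hires: (rows, cols) band on the same grid."""
    bands, rows, cols = ms.shape
    pixels = ms.reshape(bands, -1).T            # (n_pixels, bands)
    pca = PCA(n_components=bands)
    pcs = pca.fit_transform(pixels)

    # Match the substitute band to PC1's mean and standard deviation before
    # swapping it in, so the inverse transform stays radiometrically plausible.
    flat = hires.ravel().astype(np.float64)
    flat = (flat - flat.mean()) / flat.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = flat

    return pca.inverse_transform(pcs).T.reshape(bands, rows, cols)

rng = np.random.default_rng(2)
tm = rng.uniform(0, 255, size=(6, 64, 64))      # stand-in for Landsat TM bands
sar = rng.uniform(0, 255, size=(64, 64))        # stand-in for PALSAR backscatter
print(pca_fusion(tm, sar).shape)                # (6, 64, 64)
```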

    Classifying Multisensor Remote Sensing Data: Concepts, Algorithms and Applications

    Today, a large share of the Earth’s land surface has been affected by human-induced land cover changes. Detailed knowledge of land cover is fundamental to many decision-support and monitoring systems, and Earth-observation (EO) systems have the potential to provide information on land cover frequently; thus many land cover classifications are performed on remotely sensed EO data. In this context, it has been shown that the performance of remote sensing applications is further improved by multisensor data sets, such as combinations of synthetic aperture radar (SAR) and multispectral imagery: the two systems operate in different wavelength domains and therefore provide different yet complementary information on land cover. Considering the shorter revisit times and better spatial resolutions of recent and upcoming systems like TerraSAR-X (11 days; up to 1 m), Radarsat-2 (24 days; up to 3 m), or the RapidEye constellation (up to 1 day; 5 m), multisensor approaches become even more promising. However, such data sets with high spatial and temporal resolution can become very large and complex, and commonly used statistical pattern recognition methods are usually not appropriate for classifying them. Hence, one of the greatest challenges in remote sensing might be the development of adequate concepts for classifying multisensor imagery. The presented study aims at an adequate classification of multisensor data sets, including SAR data and multispectral images. Different conventional classifiers and recent developments are used, such as support vector machines (SVM) and random forests (RF), which are well known in the fields of machine learning and pattern recognition. Furthermore, the impact of image segmentation on classification accuracy is investigated and the value of a multilevel concept is discussed. To increase the classification accuracy of the algorithms, the SVM concept is modified and combined with RF for optimized decision making. The results clearly demonstrate that the use of multisensor imagery is worthwhile: irrespective of the classification method used, classification accuracies increase when SAR and multispectral imagery are combined. Nevertheless, SVM and RF are better suited to classifying multisensor data sets and significantly outperform conventional classifier algorithms in terms of accuracy. The multisensor-multilevel classification strategy introduced last, which is based on the sequential use of SVM and RF, outperforms all other approaches, achieving an accuracy of 84.9%. This is significantly higher than all single-source results and also better than those achieved on any other combination of data; by contrast, pixel-based classification on single-source data sets achieves maximum accuracies of 65% (SAR) and 69.8% (multispectral), respectively. Both aspects, i.e., the fusion of SAR and multispectral data as well as the integration of multiple segmentation scales, improve the results. The findings and good performance of the presented strategy are underlined by the successful application of the approach to data sets from a second year. Based on the results of this work, it can be concluded that the suggested strategy is particularly interesting with regard to recent and future satellite missions.
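    The abstract describes a strategy based on the sequential use of SVM and RF without spelling out the coupling. One plausible reading, sketched below with scikit-learn on synthetic stand-in features, is to feed the SVM's per-class decision values to an RF that makes the final decision; the feature layout, class count, and coupling are all assumptions rather than the study's actual design.

```python
# Hedged sketch of chaining SVM and RF on stacked SAR + multispectral features;
# synthetic data stands in for real imagery, and the coupling is one guess at
# the "sequential" strategy, not a reproduction of it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_pixels, n_sar, n_ms = 500, 4, 6
X = np.hstack([rng.normal(size=(n_pixels, n_sar)),   # SAR backscatter features
               rng.normal(size=(n_pixels, n_ms))])   # multispectral features
y = rng.integers(0, 5, size=n_pixels)                # five land cover classes

# Stage 1: SVM produces per-class decision values for every pixel.
svm = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
svm_scores = svm.decision_function(X)                # (n_pixels, n_classes)

# Stage 2: RF makes the final decision from the original features plus the
# SVM outputs, letting the second classifier refine the first.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(np.hstack([X, svm_scores]), y)
print(rf.predict(np.hstack([X, svm_scores]))[:10])
```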

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches for the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from 2D/3D data representations to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have developed along different paths in each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations of this challenging topic, supplying sufficient detail and references.