3,423 research outputs found

    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first study case considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor to fuse multifrequency, multipolarization and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each study case also presents results achieved by the proposed techniques when applied to real data.
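
    The third study case rests on a wavelet-domain fusion rule. As a rough illustration only, the sketch below fuses two co-registered single-band images with a generic wavelet rule (average the approximation coefficients, keep the larger-magnitude detail coefficients); the paper's multiscale Kalman filter is not reproduced here, and the input arrays are hypothetical placeholders.

```python
# Generic wavelet-based fusion of two co-registered single-band images.
# A minimal sketch only; the survey's multiscale Kalman filter is omitted.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Average approximation coefficients, keep the larger-magnitude details."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                      # approximation: mean
    for details_a, details_b in zip(ca[1:], cb[1:]):
        # keep whichever detail coefficient has the larger magnitude
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(details_a, details_b)))
    return pywt.waverec2(fused, wavelet)

# Random arrays standing in for two co-registered SAR acquisitions.
rng = np.random.default_rng(0)
a, b = rng.random((128, 128)), rng.random((128, 128))
print(wavelet_fuse(a, b).shape)
```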

    Wavelet-based fusion of SPOT/VEGETATION and Envisat/Wide Swath data applied to wetland mapping

    Personnel recognition and gait classification based on multistatic micro-Doppler signatures using deep convolutional neural networks

    In this letter, we propose two methods for personnel recognition and gait classification using deep convolutional neural networks (DCNNs) based on multistatic radar micro-Doppler signatures. Previous DCNN-based schemes have mainly focused on monostatic scenarios, whereas the directional diversity offered by multistatic radar is exploited in this letter to improve classification accuracy. We first propose the voted monostatic DCNN (VMo-DCNN) method, which trains DCNNs on each receiver node separately and fuses the results by binary voting. By merging the fusion step into the network architecture, we further propose the multistatic DCNN (Mul-DCNN) method, which performs slightly better than VMo-DCNN. These methods are validated on real data measured with a 2.4-GHz multistatic radar system. Experimental results show that Mul-DCNN achieves over 99% accuracy in armed/unarmed gait classification using only 20% of the training data, and similar performance in two-class personnel recognition using 50% of the training data, both higher than the accuracies obtained by running a DCNN on a single radar node.
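
    The VMo-DCNN fusion step amounts to a per-sample majority vote over the class labels predicted at each receiver node. A minimal sketch of that voting step, assuming per-node softmax scores are already available, is given below; the score values are illustrative placeholders and the networks themselves are omitted.

```python
# Majority-vote fusion of per-node classifier outputs, in the spirit of
# VMo-DCNN. The scores below are placeholders, not measured radar data.
import numpy as np

def vote_fusion(node_scores):
    """node_scores: (n_nodes, n_samples, n_classes) softmax outputs.
    Returns the majority-voted class label per sample."""
    votes = node_scores.argmax(axis=-1)                  # (n_nodes, n_samples)
    n_classes = node_scores.shape[-1]
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)                         # (n_samples,)

# Three radar nodes, four samples, two classes (e.g. armed vs. unarmed).
scores = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]],
    [[0.7, 0.3], [0.4, 0.6], [0.2, 0.8], [0.1, 0.9]],
    [[0.8, 0.2], [0.3, 0.7], [0.7, 0.3], [0.4, 0.6]],
])
print(vote_fusion(scores))   # -> [0 1 0 1]
```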

    The integration of freely available medium resolution optical sensors with Synthetic Aperture Radar (SAR) imagery capabilities for American bramble (Rubus cuneifolius) invasion detection and mapping.

    Doctoral Degree. University of KwaZulu-Natal, Pietermaritzburg. The emergence of American bramble (Rubus cuneifolius) across South Africa has caused severe ecological and economic damage. To date, most of the efforts to mitigate its effects have been largely unsuccessful due to its prolific growth and widespread distribution. Accurate and timeous detection and mapping of Bramble is therefore critical to the development of effective eradication management plans. Hence, this study sought to determine the potential of freely available, new generation medium spatial resolution satellite imagery for the detection and mapping of American Bramble infestations within the UNESCO world heritage site of the uKhahlamba Drakensberg Park (UDP). The first part of the thesis determined the potential of conventional freely available remote sensing imagery for the detection and mapping of Bramble. Utilizing the Support Vector Machine (SVM) learning algorithm, it was established that Bramble could be detected with a limited user's accuracy (45%) and a reasonable producer's accuracy (80%). Much of the confusion occurred between the grassland land cover class and Bramble. The second part of the study focused on fusing the new age optical imagery and Synthetic Aperture Radar (SAR) imagery for Bramble detection and mapping. The synergistic potential of fused imagery was evaluated using a multiclass SVM classification algorithm. Feature-level image fusion of optical imagery and SAR resulted in an overall classification accuracy of 76%, with increased user's and producer's accuracies for Bramble. These positive results offered an opportunity to explore the polarization variables associated with SAR imagery for improved classification accuracies. The final section of the study dwelt on the use of Vegetation Indices (VIs) derived from new age satellite imagery, in concert with SAR, to improve Bramble classification accuracies. Whereas improvements in classification accuracies were minimal, the potential of stand-alone VIs to detect and map Bramble (80%) was noteworthy. Lastly, dual-polarized SAR was fused with new age optical imagery to determine the synergistic potential of dual-polarized SAR to increase Bramble mapping accuracies. Results indicated a marked increase in overall Bramble classification accuracy (85%), suggesting improved potential of dual-polarized SAR and optical imagery in invasive species detection and mapping. Overall, this study provides sufficient evidence of the complementary and synergistic potential of active and passive remote sensing imagery for invasive alien species detection and mapping. Results of this study are important for supporting contemporary decision making relating to invasive species management and eradication in order to safeguard the ecological biodiversity and pristine status of nationally protected areas.
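
    The feature-level fusion and SVM classification workflow described above boils down to stacking co-registered optical and SAR features and training a multiclass SVM on the stacked matrix. The sketch below illustrates that workflow with synthetic features and labels standing in for the real imagery; the band counts and class names are assumptions, not the study's.

```python
# Feature-level (layer-stacked) fusion of optical and SAR features,
# classified with a multiclass SVM. All inputs are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 500
optical = rng.normal(size=(n, 6))     # e.g. six optical bands / indices per sample
sar = rng.normal(size=(n, 2))         # e.g. VV and VH backscatter per sample
labels = rng.integers(0, 4, size=n)   # e.g. bramble, grassland, forest, bare ground

fused = np.hstack([optical, sar])     # feature-level fusion by stacking
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print("overall accuracy:", clf.score(X_te, y_te))
```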

    Extraction of forest plantation extents using majority voting classification fusion algorithm

    © 2018 Proceedings - 39th Asian Conference on Remote Sensing: Remote Sensing Enabling Prosperity, ACRS 2018. The satellite-borne Phased Array L-band Synthetic Aperture Radar-2 (PALSAR-2) has great advantages for extracting natural and industrial forest plantations in tropical areas, but it suffers from speckle, which makes it difficult to identify forest bodies. Optimal fusion of Landsat-8 Operational Land Imager (OLI) bands with ALOS PALSAR-2 can provide the ideal complementary information for accurate forest extraction while suppressing unwanted information. The goal of this study is to analyze the potential of Landsat-8 OLI and ALOS PALSAR-2 as complementary data sources for extracting land cover, especially forest types. Comprehensive preprocessing (e.g. geometric correction, filtering enhancement and polarization combination) was conducted on the ALOS PALSAR-2 dataset in order to make the imagery ready for processing. The principal component index method, one of the most effective pan-sharpening fusion approaches, was used to synthesize the Landsat and ALOS PALSAR-2 images. Three different classifiers (support vector machine, k-nearest neighbors, and random forest) were employed and their outputs fused by a majority voting algorithm to generate a more robust and precise classification result. Accuracy of the final fused result was assessed on the basis of ground truth points using confusion matrices and the kappa coefficient. This study shows that the accurate and reliable majority voting fusion method can be used to extract large-scale land cover, with emphasis on natural and industrial forest plantations, from synthetic aperture radar and optical datasets.
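
    A minimal sketch of majority-voting fusion of the three classifiers named above (SVM, k-NN, random forest) follows. The feature matrix and labels are synthetic placeholders rather than the pan-sharpened Landsat-8 OLI / PALSAR-2 data, and scikit-learn's hard-voting ensemble is used as one possible implementation of the vote.

```python
# Majority-voting fusion of SVM, k-NN and random forest classifiers.
# Synthetic features/labels stand in for the fused optical + SAR imagery.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))        # stacked optical + SAR features (placeholder)
y = rng.integers(0, 3, size=600)     # e.g. natural forest, plantation, other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC())),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="hard",                   # majority vote over predicted class labels
)
voter.fit(X_tr, y_tr)
print("overall accuracy:", voter.score(X_te, y_te))
```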

    Comparison of layer-stacking and Dempster-Shafer theory-based methods using Sentinel-1 and Sentinel-2 data fusion in urban land cover mapping

    Data fusion has shown potential to improve the accuracy of land cover mapping, and selection of the optimal fusion technique remains a challenge. This study investigated the performance of fusing Sentinel-1 (S-1) and Sentinel-2 (S-2) data, using a layer-stacking method at the pixel level and a Dempster-Shafer (D-S) theory-based approach at the decision level, for mapping six land cover classes in Thu Dau Mot City, Vietnam. At the pixel level, S-1 and S-2 bands and their extracted textures and indices were stacked into different single-sensor and multi-sensor datasets (i.e. fused datasets). The datasets were categorized into two groups. One group included the datasets containing only spectral and backscattering bands, and the other group included the datasets consisting of these bands and their extracted features. The random forest (RF) classifier was then applied to the datasets within each group. At the decision level, the RF classification outputs of the single-sensor datasets within each group were fused together based on D-S theory. Finally, the accuracy of the mapping results at both levels within each group was compared. The results showed that fusion at the decision level provided the best mapping accuracy compared to the results from other products within each group. The highest overall accuracy (OA) and Kappa coefficient of the map using D-S theory were 92.67% and 0.91, respectively. The decision-level fusion helped increase the OA of the map by 0.75% to 2.07% compared to that of the corresponding S-2 products in the groups. Meanwhile, data fusion at the pixel level delivered mapping results with an OA 4.88% to 6.58% lower than that of the corresponding S-2 products in the groups.
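
    Decision-level fusion with Dempster-Shafer theory combines mass functions derived from each sensor's classification output. The sketch below applies Dempster's rule of combination to two sources whose evidence is restricted to singleton classes plus full ignorance; the class names and mass values are illustrative and not taken from the study.

```python
# Dempster's rule of combination for two evidence sources (e.g. per-pixel
# outputs from S-1 and S-2 classifications). Mass values are illustrative.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions keyed by frozensets of class labels."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    scale = 1.0 - conflict                     # Dempster normalisation
    return {k: v / scale for k, v in fused.items()}

theta = frozenset({"urban", "water", "vegetation"})          # full ignorance
m_s1 = {frozenset({"urban"}): 0.6, frozenset({"water"}): 0.1, theta: 0.3}
m_s2 = {frozenset({"urban"}): 0.5, frozenset({"vegetation"}): 0.2, theta: 0.3}

fused = combine(m_s1, m_s2)
print(max(fused, key=fused.get), fused)        # most-supported hypothesis wins
```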

    JERS-1 SAR and LANDSAT-5 TM image data fusion: An application approach for lithological mapping

    Satellite image data fusion is a set of image processing procedures utilised either for image optimisation for visual photointerpretation, or for automated thematic classification with a low error rate and high accuracy. Lithological mapping using remote sensing image data relies on the spectral and textural information of the rock units of the area to be mapped. This information can be derived from Landsat optical TM and JERS-1 SAR images, respectively. Prior to extracting such information (spectral and textural) and fusing it together, geometric image co-registration between the TM and the SAR, atmospheric correction of the TM, and SAR despeckling are required. In this thesis, an appropriate atmospheric model is developed and implemented utilising the dark pixel subtraction method for atmospheric correction. For SAR despeckling, an efficient new method is also developed to test whether the SAR filter used removes the textural information or not. For image optimisation for visual photointerpretation, a new method of spectral coding of the six bands of the optical TM data is developed. The new spectral coding method is used to produce an efficient colour composite with high separability between the spectral classes, similar to that obtained if all six optical TM bands are used together. This spectrally coded colour composite is used as the spectral component, which is then fused with the textural component represented by the despeckled JERS-1 SAR using fusion tools including the colour transform and the PCT. The Grey Level Co-occurrence Matrix (GLCM) technique is used to build the textural data set from the speckle-filtered JERS-1 SAR data, yielding seven textural GLCM measures. For automated thematic mapping, using both the six TM spectral bands and the seven textural GLCM measures, a new classification method has been developed based on the Maximum Likelihood Classifier (MLC). The method is named sequential maximum likelihood classification and works efficiently by comparing the classified textural pixels, the classified spectral pixels, and the classified textural-spectral pixels, and provides a means of utilising the textural and spectral information for automated lithological mapping.
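
    The GLCM textural component described above can be illustrated by computing a grey-level co-occurrence matrix from a despeckled SAR band and deriving a few classic texture measures. In the sketch below the input array is a synthetic placeholder, and only four of the thesis's seven measures are shown.

```python
# GLCM texture measures from a (placeholder) despeckled SAR band,
# quantised to 8-bit grey levels.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
sar_band = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in for despeckled SAR

# Co-occurrence matrix for one pixel offset at two orientations.
glcm = graycomatrix(sar_band, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# A few of the classic GLCM measures (the thesis uses seven in total).
for measure in ("contrast", "homogeneity", "energy", "correlation"):
    print(measure, graycoprops(glcm, measure).mean())
```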

    Fusion of Multisource Images for Update of Urban GIS

    Enhancing Landsat time series through multi-sensor fusion and integration of meteorological data

    Over 50 years ago, the United States Interior Secretary, Stewart Udall, directed space agencies to gather "facts about the natural resources of the earth." Today, global climate change and human modification make earth observations from a wide variety of sensors essential for understanding and adapting to environmental change. The Landsat program has been an invaluable source for understanding the history of the land surface, with consistent observations from the Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) sensors since 1982. This dissertation develops and explores methods for enhancing the TM/ETM+ record by fusing other data sources: specifically, Landsat 8 for future continuity, radar data for tropical forest monitoring, and meteorological data for semi-arid vegetation dynamics. Landsat 8 data may be incorporated into existing time series of Landsat 4-7 data for applications like change detection, but vegetation trend analysis requires calibration, especially when using the near-infrared band. The improvements in radiometric quality and cloud masking provided by Landsat 8 data reduce noise compared to previous sensors. Tropical forests are notoriously difficult to monitor with Landsat alone because of clouds. This dissertation developed and compared two approaches for fusing Synthetic Aperture Radar (SAR) data from the Advanced Land Observing Satellite (ALOS-1) with Landsat in Peru, and found that radar data increased the accuracy of deforestation mapping. Simulations indicate that the benefit of using radar data increased with higher cloud cover. Time series analysis of vegetation indices from Landsat in semi-arid environments is complicated by the response of vegetation to high variability in the timing and amount of precipitation. We found that quantifying dynamics in precipitation and drought index data improved land cover change detection performance compared to more traditional harmonic modeling for grasslands and shrublands in California. This dissertation enhances the value of Landsat data by combining it with other data sources, including other optical sensors, SAR data, and meteorological data. The methods developed here show the potential for data fusion and are especially important in light of recent and upcoming missions like Sentinel-1, Sentinel-2, and the NASA-ISRO Synthetic Aperture Radar (NISAR).
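
    For reference, the harmonic-model baseline that the dissertation compares against can be sketched as an annual sine/cosine pair plus a linear trend fitted by least squares; the NDVI series below is simulated and the coefficients are illustrative only.

```python
# Simple harmonic model (intercept + trend + annual harmonic) fitted by
# ordinary least squares to a simulated Landsat NDVI time series.
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(2000, 2010, 200))      # decimal years of clear observations
ndvi = 0.4 + 0.2 * np.sin(2 * np.pi * t) + 0.005 * (t - 2000) + rng.normal(0, 0.03, t.size)

# Design matrix: intercept, linear trend, annual cosine/sine pair.
X = np.column_stack([np.ones_like(t), t - 2000,
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
intercept, trend, c1, s1 = coef
amplitude = np.hypot(c1, s1)                   # seasonal amplitude from the harmonic pair
print(f"trend per year: {trend:.4f}, seasonal amplitude: {amplitude:.3f}")
```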