3,970 research outputs found

    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor that fuses multifrequency, multipolarization and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by the proposed techniques applied to real data.
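
    As an illustration of the kind of wavelet-domain fusion the third case study describes, the sketch below fuses two co-registered images by combining their wavelet coefficients. It is a minimal, generic example (assuming NumPy and the PyWavelets package are available), not the paper's multiscale Kalman filter processor.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two co-registered, same-sized 2-D arrays in the wavelet domain."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    # Average the coarse approximation; keep the stronger detail coefficient.
    fused = [(ca[0] + cb[0]) / 2.0]
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append((
            np.where(np.abs(ha) >= np.abs(hb), ha, hb),
            np.where(np.abs(va) >= np.abs(vb), va, vb),
            np.where(np.abs(da) >= np.abs(db), da, db),
        ))
    return pywt.waverec2(fused, wavelet)

# Toy usage with random arrays standing in for two co-registered acquisitions.
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
fused = wavelet_fuse(a, b)
```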

    The integration of freely available medium resolution optical sensors with Synthetic Aperture Radar (SAR) imagery capabilities for American bramble (Rubus cuneifolius) invasion detection and mapping.

    Doctoral Degree. University of KwaZulu-Natal, Pietermaritzburg. The emergence of American bramble (Rubus cuneifolius) across South Africa has caused severe ecological and economic damage. To date, most of the efforts to mitigate its effects have been largely unsuccessful due to its prolific growth and widespread distribution. Accurate and timeous detection and mapping of Bramble are therefore critical to the development of effective eradication management plans. Hence, this study sought to determine the potential of freely available, new-generation, medium spatial resolution satellite imagery for the detection and mapping of American Bramble infestations within the UNESCO World Heritage Site of the uKhahlamba Drakensberg Park (UDP). The first part of the thesis determined the potential of conventional freely available remote sensing imagery for the detection and mapping of Bramble. Utilizing the Support Vector Machine (SVM) learning algorithm, it was established that Bramble could be detected with limited user's (45%) and reasonable producer's (80%) accuracies. Much of the confusion occurred between the grassland land cover class and Bramble. The second part of the study focused on fusing the new-generation optical imagery and Synthetic Aperture Radar (SAR) imagery for Bramble detection and mapping. The synergistic potential of fused imagery was evaluated using a multiclass SVM classification algorithm. Feature-level image fusion of optical and SAR imagery resulted in an overall classification accuracy of 76%, with increased user's and producer's accuracies for Bramble. These positive results offered an opportunity to explore the polarization variables associated with SAR imagery for improved classification accuracies. The final section of the study dwelt on the use of Vegetation Indices (VIs) derived from new-generation satellite imagery, in concert with SAR, to improve Bramble classification accuracies. Whereas improvements in classification accuracies were minimal, the potential of stand-alone VIs to detect and map Bramble (80%) was noteworthy. Lastly, dual-polarized SAR was fused with new-generation optical imagery to determine the synergistic potential of dual-polarized SAR to increase Bramble mapping accuracies. Results indicated a marked increase in overall Bramble classification accuracy (85%), suggesting improved potential of dual-polarized SAR and optical imagery in invasive species detection and mapping. Overall, this study provides sufficient evidence of the complementary and synergistic potential of active and passive remote sensing imagery for invasive alien species detection and mapping. Results of this study are important for supporting contemporary decision-making relating to invasive species management and eradication, in order to safeguard the ecological biodiversity and pristine status of nationally protected areas.
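
    Feature-level fusion of the kind described above essentially stacks optical bands and SAR backscatter into one feature vector per sample before training a multiclass SVM. The snippet below is a hypothetical, minimal sketch of that workflow with scikit-learn; the array names, band counts and class labels are placeholders, not the thesis's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Placeholder inputs: per-pixel samples with optical bands and SAR backscatter.
rng = np.random.default_rng(0)
optical = rng.random((1000, 4))        # e.g. 4 optical band reflectances
sar = rng.random((1000, 2))            # e.g. VV and VH backscatter
labels = rng.integers(0, 3, 1000)      # e.g. bramble / grassland / forest

# Feature-level fusion: concatenate the two sources per sample.
features = np.hstack([optical, sar])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=10, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

# Per-class precision/recall correspond to user's/producer's accuracies.
print(classification_report(y_test, clf.predict(scaler.transform(X_test))))
```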

    HIRIS (High-Resolution Imaging Spectrometer): Science opportunities for the 1990s. Earth Observing System, Volume 2C: Instrument panel report

    The High-Resolution Imaging Spectrometer (HIRIS) is an Earth Observing System (EOS) sensor developed for high spatial and spectral resolution. It can acquire more information in the 0.4 to 2.5 micrometer spectral region than any other sensor yet envisioned. Its capability for critical sampling at high spatial resolution makes it an ideal complement to MODIS (the Moderate-Resolution Imaging Spectrometer) and HMMR (the High-Resolution Multifrequency Microwave Radiometer), lower-resolution sensors designed for repetitive coverage. With HIRIS it is possible to observe transient processes in a multistage remote sensing strategy for Earth observations on a global scale. The objectives, science requirements, and current sensor design of HIRIS are discussed, along with the synergism of the sensor with other EOS instruments and the data handling and processing requirements.

    Assessment of multi-temporal, multi-sensor radar and ancillary spatial data for grasslands monitoring in Ireland using machine learning approaches

    Accurate inventories of grasslands are important for studies of carbon dynamics, biodiversity conservation and agricultural management. For regions with persistent cloud cover, the use of multi-temporal synthetic aperture radar (SAR) data provides an attractive solution for generating up-to-date inventories of grasslands. This is even more appealing considering the data that will be available from upcoming missions such as Sentinel-1 and ALOS-2. In this study, the performance of three machine learning algorithms, Random Forests (RF), Support Vector Machines (SVM) and the relatively underused Extremely Randomised Trees (ERT), is evaluated for discriminating between grassland types over two large heterogeneous areas of Ireland using multi-temporal, multi-sensor radar and ancillary spatial datasets. A detailed accuracy assessment shows the efficacy of the three algorithms to classify different types of grasslands. Overall accuracies ≥ 88.7% (with a kappa coefficient of 0.87) were achieved for the single-frequency classifications, and maximum accuracies of 97.9% (kappa coefficient of 0.98) for the combined-frequency classifications. For most datasets, the ERT classifier outperforms SVM and RF.
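
    To make the algorithm comparison concrete, the sketch below trains the three classifiers named in the abstract on a placeholder feature matrix and reports overall accuracy and Cohen's kappa with scikit-learn, where Extremely Randomised Trees is available as ExtraTreesClassifier. This is an illustrative outline under assumed data shapes, not the study's actual pipeline or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder multi-temporal radar features and grassland-type labels.
rng = np.random.default_rng(1)
X = rng.random((2000, 12))            # e.g. 12 backscatter dates/channels
y = rng.integers(0, 4, 2000)          # e.g. 4 grassland classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

classifiers = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=1),
    "ERT": ExtraTreesClassifier(n_estimators=500, random_state=1),
    "SVM": SVC(kernel="rbf", gamma="scale"),
}

for name, clf in classifiers.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "overall accuracy:", round(accuracy_score(y_te, pred), 3),
          "kappa:", round(cohen_kappa_score(y_te, pred), 3))
```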

    Co-Orbital Sentinel 1 and 2 for LULC Mapping with Emphasis on Wetlands in a Mediterranean Setting Based on Machine Learning

    This study aimed at evaluating the synergistic use of Sentinel-1 and Sentinel-2 data combined with the Support Vector Machines (SVMs) machine learning classifier for mapping land use and land cover (LULC) with emphasis on wetlands. In this context, the added value of spectral information derived from Principal Component Analysis (PCA), Minimum Noise Fraction (MNF) and the Grey Level Co-occurrence Matrix (GLCM) to the classification accuracy was also evaluated. As a case study, the National Park of Koronia and Volvi Lakes (NPKV), located in Greece, was selected. LULC accuracy assessment was based on the computation of the classification error statistics and the kappa coefficient. Findings of our study exemplified the appropriateness of the spatial and spectral resolution of Sentinel data in obtaining a rapid and cost-effective LULC cartography, and for wetlands in particular. The most accurate classification results were obtained when the additional spectral information was included to assist the classification implementation, increasing overall accuracy from 90.83% to 93.85% and kappa from 0.894 to 0.928. A post-classification correction (PCC) using knowledge-based logic rules further improved the overall accuracy to 94.82% and kappa to 0.936. This study provides further supporting evidence on the suitability of Sentinel-1 and Sentinel-2 data for improving our ability to map a complex area containing wetland and non-wetland LULC classes.
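
    The overall accuracy and kappa values the study reports can both be read off a classification error (confusion) matrix. The short function below shows the standard kappa computation with NumPy on a toy matrix; the numbers are purely illustrative, not the study's error matrix.

```python
import numpy as np

def kappa_from_confusion(cm):
    """Cohen's kappa from a square confusion matrix (rows = reference, cols = map)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement (overall accuracy)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Toy 3-class example.
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 44]]
print(round(kappa_from_confusion(cm), 3))
```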

    Evaluation of space SAR as a land-cover classification

    The multidimensional approach to the mapping of land cover, crops, and forests is reported. Dimensionality is achieved by using data from sensors such as LANDSAT to augment Seasat and Shuttle Imaging Radar (SIR) data, using different image features such as tone and texture, and acquiring multidate data. Seasat, Shuttle Imaging Radar (SIR-A), and LANDSAT data are used both individually and in combination to map land cover in Oklahoma. The results indicate that radar is the best single sensor (72% accuracy) and produces the best sensor combination (97.5% accuracy) for discriminating among five land cover categories. Multidate Seasat data and a single date of LANDSAT coverage are then used in a crop classification study of western Kansas. The highest accuracy for a single channel is achieved using a Seasat scene, which produces a classification accuracy of 67%. Classification accuracy increases to approximately 75% when either a multidate Seasat combination or LANDSAT data in a multisensor combination is used. The tonal and textural elements of SIR-A data are then used both alone and in combination to classify forests into five categories.

    Implementing an object-based multi-index protocol for mapping surface glacier facies from Chandra-Bhaga basin, Himalaya

    Surface glacier facies are superficial expressions of a glacier that are distinguishable based on differing spectral and structural characteristics according to their age and inter-mixed impurities. A growing body of literature suggests that the varying properties of surface glacier facies differentially influence the melt of the glacier, thus affecting the mass balance. Incorporating these variations into distributed mass balance modelling can improve the perceived accuracy of these models. However, detecting and subsequently mapping these facies with a high degree of accuracy is a necessary precursor to such complex modelling. The variations in the reflectance spectra of various glacier facies permit multiband imagery to exploit band ratios for their effective extraction. However, coarse and medium spatial resolution multispectral imagery can limit the efficacy of band ratioing by muddling the minor spatial and spectral variations of a glacier. Very high-resolution imagery, on the other hand, creates distortions in the conventionally obtained information extracted through pixel-based classification. Therefore, robust and adaptable methods coupled with higher resolution data products are necessary to effectively map glacier facies. This study endeavours to identify and isolate glacier facies on two unnamed glaciers in the Chandra-Bhaga basin, Himalaya, using an established object-based multi-index protocol. Exploiting the very high resolution offered by WorldView-2 and its eight spectral bands, this study implements customized spectral index ratios in an object-based environment. Pixel-based supervised classification is also performed using three popular classifiers to comparatively gauge the classification accuracies. The object-based multi-index protocol delivered the highest overall accuracy of 86.67%. The Minimum Distance classifier yielded the lowest overall accuracy of 62.50%, whereas the Mahalanobis Distance and Maximum Likelihood classifiers yielded overall accuracies of 77.50% and 70.84%, respectively. The results outline the superiority of the object-based method for the extraction of glacier facies. Forthcoming studies must refine the indices and test their applicability in wide-ranging scenarios.
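
    Band ratioing of the kind the protocol relies on reduces, per pixel, to normalized differences between spectral bands. The snippet below computes one generic normalized-difference ratio and thresholds it with NumPy; the band choices and threshold are illustrative assumptions, not the study's customized WorldView-2 indices.

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-9):
    """Per-pixel normalized difference (band_a - band_b) / (band_a + band_b)."""
    band_a = band_a.astype(float)
    band_b = band_b.astype(float)
    return (band_a - band_b) / (band_a + band_b + eps)

# Toy reflectance arrays standing in for two WorldView-2 bands.
rng = np.random.default_rng(3)
green = rng.random((256, 256))
nir = rng.random((256, 256))

index = normalized_difference(green, nir)
facies_mask = index > 0.2    # illustrative threshold separating one facies class
print(facies_mask.mean())    # fraction of pixels flagged
```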

    Spaceborne L-Band Synthetic Aperture Radar Data for Geoscientific Analyses in Coastal Land Applications: A Review

    The coastal zone contains some of the world’s most productive and valuable ecosystems and is experiencing increasing pressure from anthropogenic impacts: human settlements, agriculture, aquaculture, trade, industrial activities, oil and gas exploitation and tourism. Earth observation has great capability to deliver valuable data at the local, regional and global scales and can support the assessment and monitoring of land- and water-related applications in coastal zones. Compared to optical satellites, cloud cover does not limit the timeliness of data acquisition with spaceborne Synthetic Aperture Radar (SAR) sensors, which have all-weather, day and night capabilities. Hence, active radar systems demonstrate great potential for continuous mapping and monitoring of coastal regions, particularly in cloud-prone tropical and sub-tropical climates. The canopy-penetration capability of the long radar wavelength enables L-band SAR data to be used for coastal terrestrial environments, and such data have been widely applied and investigated for the following geoscientific topics: mapping and monitoring of flooded vegetation and inundated areas; the retrieval of aboveground biomass; and the estimation of soil moisture. Human activities, global population growth, urban sprawl and climate change-induced impacts are leading to increased pressure on coastal ecosystems, causing land degradation, deforestation and land use change. This review presents a comprehensive overview of existing research articles that apply spaceborne L-band SAR data for geoscientific analyses relevant to coastal land applications.

    Investigation of Coastal Vegetation Dynamics and Persistence in Response to Hydrologic and Climatic Events Using Remote Sensing

    Coastal Wetlands (CW) provide numerous vital functions and an economic base for human societies. Therefore, it is imperative to track and quantify both short- and long-term changes in these systems. In this dissertation, CW dynamics related to hydro-meteorological signals were investigated using a series of LANDSAT-derived normalized difference vegetation index (NDVI) data and hydro-meteorological time-series data in Apalachicola Bay, Florida, from 1984 to 2015. NDVI in forested wetlands exhibited more persistence compared to that for scrub and emergent wetlands. NDVI fluctuations generally lagged temperature by approximately three months, and water level by approximately two months. This analysis provided insight into long-term CW dynamics in the Northern Gulf of Mexico. Long-term studies like this depend on optical remote sensing data such as Landsat, which are frequently partially obscured by clouds; this can make the time series sparse and unusable during meteorologically active seasons. Therefore, a multi-sensor, virtual constellation method is proposed and demonstrated to recover the information lost due to cloud cover. This method, named Tri-Sensor Fusion (TSF), produces a simulated constellation for NDVI by integrating data from three compatible satellite sensors. The visible and near-infrared (VNIR) bands of Landsat-8 (L8), Sentinel-2, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) were utilized to map NDVI and to compensate for each satellite sensor's shortcomings in visible coverage area. The quantitative comparison showed a Root Mean Squared Error (RMSE) and Coefficient of Determination (R2) of 0.0020 sr-1 and 0.88, respectively, between true observed and fused L8 NDVI. Statistical test results and qualitative performance evaluation suggest that TSF was able to synthesize the missing pixels accurately in terms of the absolute magnitude of NDVI. The fusion improved the spatial coverage of CWs reasonably well and ultimately increased the continuity of NDVI data for long-term studies.
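
    For context, NDVI is the normalized difference of near-infrared and red reflectance, and the fusion was scored with RMSE and R2 against observed Landsat-8 NDVI. The sketch below shows those standard computations with NumPy on placeholder arrays; the variable names and values are assumptions for illustration, not the dissertation's data.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

rng = np.random.default_rng(4)
red = rng.random((100, 100)) * 0.1
nir = rng.random((100, 100)) * 0.5 + 0.2

observed = ndvi(nir, red)
# Stand-in for "fused" NDVI from the virtual constellation: observed plus small noise.
fused = observed + rng.normal(0, 0.01, observed.shape)

rmse = np.sqrt(np.mean((fused - observed) ** 2))
ss_res = np.sum((observed - fused) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"RMSE = {rmse:.4f}, R2 = {r2:.3f}")
```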