
    Feature extraction and fusion for classification of remote sensing imagery


    Commercial forest species discrimination and mapping using cost effective multispectral remote sensing in midlands region of KwaZulu-Natal province, South Africa.

    Master's Degree. University of KwaZulu-Natal, Pietermaritzburg, 2018. Discriminating forest species is critical for generating the accurate and reliable information necessary for sustainable management and monitoring of forests. Remote sensing has recently become a valuable source of information in commercial forest management. Specifically, high spatial resolution sensors have become increasingly popular in forest mapping and management. However, such sensors are costly and have limited spatial coverage, necessitating the investigation of cost-effective, timely and readily available new-generation sensors characterized by a larger swath width useful for regional mapping. Therefore, this study sought to discriminate and map commercial forest species (i.e. E. dunnii, E. grandis, E. mix, A. mearnsii, P. taeda, P. tecunumanii and P. elliottii) using cost-effective multispectral sensors. The first objective of this study was to evaluate the utility of the freely available Landsat 8 Operational Land Imager (OLI) in mapping commercial forest species. Using the Partial Least Squares Discriminant Analysis algorithm, results showed that Landsat 8 OLI and the pan-sharpened version of the Landsat 8 OLI image achieved overall classification accuracies of 79% and 77.8%, respectively, while WorldView-2, used as a benchmark image, obtained 86.5%. Despite its low spatial resolution of 30 m, results show that Landsat 8 OLI was reliable in discriminating forest species with reasonable and acceptable accuracy. This freely available imagery provides a cheaper and more accessible alternative that covers a larger swath width, necessary for regional and local forest assessment and management. The second objective was to examine the effectiveness of Sentinel-1 and 2 for commercial forest species mapping. Using Linear Discriminant Analysis, results showed an overall accuracy of 84% when using the raw Sentinel-2 image as standalone data.
However, when Sentinel-2 was fused with Sentinel-1 Synthetic Aperture Radar (SAR) data, the overall accuracy increased to 88% using the Vertical transmit/Horizontal receive (VH) polarization and 87% with the Vertical transmit/Vertical receive (VV) polarization datasets. The utility of SAR data demonstrates its capability to complement Sentinel-2 multispectral imagery in forest species mapping and management. Overall, new-generation, readily available sensors demonstrated the capability to provide reliable information critical for mapping and monitoring commercial forest species at local and regional scales.

    Fusing Small-footprint Waveform LiDAR and Hyperspectral Data for Canopy-level Species Classification and Herbaceous Biomass Modeling in Savanna Ecosystems

    The study of ecosystem structure, function, and composition has become increasingly important in order to gain a better understanding of how impacts wrought by natural disturbances, climate, and human activity can alter ecosystem services provided to a population. Research groups at Rochester Institute of Technology and Carnegie Institution for Science are focusing on characterization of savanna ecosystems and are using data from the Carnegie Airborne Observatory (CAO), which integrates advanced imaging spectroscopy and waveform light detection and ranging (wLiDAR) data. The goal of this component of the larger ecosystem project is the fusion of imaging spectroscopy and small-footprint wLiDAR data in order to improve per-species structural parameter estimation towards classification and herbaceous biomass modeling. Waveform LiDAR has proven useful for extracting high vertical resolution structural parameters, while imaging spectroscopy is a well-established tool for species classification and biochemistry assessment. We hypothesize that the two modalities provide complementary information that could improve per-species structural assessment, species classification, and herbaceous biomass modeling when compared to single-modality sensing systems. We explored a statistical approach to data fusion at the feature level, which hinged on our ability to reduce structural and spectral data dimensionality to those data features best suited to assessing these complex systems. The species classification approach was based on stepwise discriminant analysis (SDA) and used feature metrics from hyperspectral imagery (HSI) combined with wLiDAR data, which could help find correlated features and in turn improve classifiers. It was found that fusing data with the SDA did not improve classification significantly, especially compared to the HSI classification results.
The overall classification accuracies were 53% for both original and PCA-based wLiDAR variables, 73% for the original HSI variables, 71% for PCA-based HSI variables, 73% for the original fusion of wLiDAR and HSI data set, and 74% for the PCA-based fusion variables. The kappa coefficients achieved with the original and PCA-based wLiDAR variable classifications were 0.41 and 0.44, respectively. For the original and PCA-based HSI classifications, the kappa coefficients were 0.63 and 0.60, respectively, and 0.62 and 0.64 for the original and PCA-based fusion variable classifications, respectively. These results show that HSI was more successful than wLiDAR in grouping important information into a smaller number of variables, and thus the inclusion of structural information did not significantly improve the classification. As for herbaceous biomass modeling, the statistical approach used for the fusion of wLiDAR and HSI was forward selection modeling (FSM), which selects significant independent metrics and models them against measured biomass. The results were measured in R² and RMSE, which indicated similar findings. Waveform LiDAR performed the poorest, with an R² of 0.07 for original wLiDAR variables and 0.12 for PCA-based wLiDAR variables. The respective RMSE values were 19.99 and 19.41. For both original and PCA-based HSI variables, the results were better, with R² of 0.32 and 0.27 and RMSE of 17.27 and 17.80, respectively. For the fusion of original and PCA-based data, the results were comparable to HSI, with R² values of 0.35 and 0.29 and RMSE of 16.88 and 17.59, respectively. These results indicate that small-footprint wLiDAR may not be able to provide accurate measurement of herbaceous biomass, although other factors could have contributed to the relatively poor results, such as the senescent state of grass by April 2008, the narrow biomass range that was measured, and the low biomass values, i.e., the limited laser-target interactions.
We concluded that although fusion did not result in significant improvements over single-modality approaches in those two use cases, there is a need for further investigation during peak growing season.
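The forward selection modeling (FSM) step described above can be sketched with scikit-learn's greedy forward selector. This is a hedged illustration only: the feature matrix, biomass values, and the number of selected metrics are synthetic stand-ins, not the study's CAO-derived data.

```python
# Sketch of forward selection modeling (FSM) for biomass regression:
# greedily add the feature metrics that most improve a linear model.
# Synthetic stand-ins for wLiDAR/HSI metrics and measured biomass.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_plots, n_metrics = 60, 12
X = rng.normal(size=(n_plots, n_metrics))
# Only metrics 0 and 3 actually drive biomass in this toy example.
biomass = 5.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=n_plots)

sfs = SequentialFeatureSelector(LinearRegression(),
                                n_features_to_select=2,
                                direction="forward", cv=5)
sfs.fit(X, biomass)
selected = np.flatnonzero(sfs.get_support())
print("selected metrics:", selected)

# Refit on the selected metrics and report R², as in the study's evaluation.
model = LinearRegression().fit(X[:, selected], biomass)
r2 = model.score(X[:, selected], biomass)
print(f"R^2 on selected metrics: {r2:.2f}")
```

In the fused setting, `X` would simply contain both wLiDAR and HSI metrics side by side, letting the selector pick across modalities.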

    LiDAR-Guided Cross-Attention Fusion for Hyperspectral Band Selection and Image Classification

    The fusion of hyperspectral and light detection and ranging (LiDAR) data has been an active research topic. Existing fusion methods have ignored the high-dimensionality and redundancy challenges in hyperspectral images (HSIs), even though band selection methods have been intensively studied for HSI processing. This article addresses this significant gap by introducing a cross-attention mechanism from the transformer architecture for the selection of HSI bands guided by LiDAR data. LiDAR provides high-resolution vertical structural information, which can be useful in distinguishing different types of land cover that may have similar spectral signatures but different structural profiles. In our approach, the LiDAR data are used as the "query" to search and identify the "key" from the HSI, choosing the bands most pertinent to LiDAR. This method ensures that the selected HSI bands drastically reduce redundancy and computational requirements while working optimally with the LiDAR data. Extensive experiments have been undertaken on three paired HSI and LiDAR datasets: Houston 2013, Trento, and MUUFL. The results highlight the superiority of the cross-attention mechanism, underlining the enhanced classification accuracy of the identified HSI bands when fused with the LiDAR features. The results also show that using fewer bands combined with LiDAR surpasses the performance of state-of-the-art fusion models.
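The query-key idea above can be illustrated with a small NumPy sketch of scaled dot-product cross-attention, where LiDAR features form the queries and each HSI band contributes a key; bands receiving the highest average attention are kept. All sizes, projections, and data here are illustrative assumptions, not the paper's learned transformer.

```python
# Minimal sketch of LiDAR-guided cross-attention band selection:
# LiDAR features act as the "query", HSI bands as "keys"; the bands
# with the highest attention weights are retained.
import numpy as np

rng = np.random.default_rng(42)
n_pixels, n_bands, d = 100, 32, 16

hsi = rng.normal(size=(n_pixels, n_bands))   # pixel x band matrix
lidar = rng.normal(size=(n_pixels, 4))       # e.g. height, intensity features

# Projections would be learned in a real model; random here for illustration.
W_q = rng.normal(size=(4, d))
W_k = rng.normal(size=(n_pixels, d))         # projects each band's pixel profile

Q = lidar @ W_q                              # (n_pixels, d) queries
K = hsi.T @ W_k                              # (n_bands, d) keys, one per band

# Scaled dot-product attention with a softmax over bands.
scores = (Q @ K.T) / np.sqrt(d)              # (n_pixels, n_bands)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
band_importance = weights.mean(axis=0)       # mean attention per band

k = 8
selected_bands = np.argsort(band_importance)[-k:][::-1]
print("top bands:", np.sort(selected_bands))
```

The selected band subset would then be stacked with the LiDAR features and passed to a downstream classifier, which is where the reported accuracy gains arise.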

    Ash Tree Identification Based on the Integration of Hyperspectral Imagery and High-density Lidar Data

    Monitoring and management of ash trees has become particularly important in recent years due to the heightened risk of attack from the invasive pest, the emerald ash borer (EAB). However, distinguishing ash from other deciduous trees can be challenging. Hyperspectral imagery and light detection and ranging (LiDAR) data are two valuable data sources that are often used for tree species classification. Hyperspectral imagery measures detailed spectral reflectance related to the biochemical properties of vegetation, while LiDAR data measures the three-dimensional structure of tree crowns related to morphological characteristics. Thus, the accuracy of vegetation classification may be improved by combining both techniques. Therefore, the objective of this research is to integrate hyperspectral imagery and LiDAR data to improve ash tree identification. Specifically, the research aims include: 1) using LiDAR data for individual tree crown segmentation; 2) using hyperspectral imagery for extraction of relatively pure crown spectra; 3) fusing hyperspectral and LiDAR data for ash tree identification. It is expected that the classification accuracy of ash trees will be significantly improved with the integration of hyperspectral and LiDAR techniques. Analysis results suggest that, first, 3D crown structures of individual trees can be reconstructed using a set of generalized geometric models that optimally match the LiDAR-derived raster image, and crown widths can be further estimated using tree height and shape-related parameters as independent variables and ground measurements of crown widths as dependent variables. Second, with a constrained linear spectral mixture analysis method, the fractions of all materials within a pixel can be extracted, and relatively pure crown-scale spectra can be further calculated using illuminated-leaf fractions as weighting factors for tree species classification.
Third, both the crown shape index (SI) and the coefficient of variation (CV) can be extracted from LiDAR data as variables that are invariant over a tree's life cycle, and they improve ash tree identification when integrated with pixel-weighted crown spectra. Therefore, this research makes three major contributions to the field of tree species classification: 1) the automatic estimation of individual tree crown width from LiDAR data by combining a generalized geometric model and a regression model, 2) the computation of relatively pure crown-scale spectral reflectance using a pixel-weighting algorithm for tree species classification, and 3) the fusion of shape-related structural features and pixel-weighted crown-scale spectral features for improved ash tree identification.
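The constrained linear spectral mixture analysis step can be sketched as a non-negative least-squares unmixing with an approximate sum-to-one constraint, a standard formulation. The endmember spectra, pixel, and weighting constant below are hypothetical stand-ins, not the study's data.

```python
# Sketch of constrained linear spectral mixture analysis: recover
# per-pixel endmember fractions (e.g. illuminated leaf, shadow,
# background) under non-negativity and an approximate sum-to-one
# constraint, enforced via a heavily weighted extra equation.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_bands = 10

# Hypothetical endmember spectra (one column per endmember).
E = np.abs(rng.normal(size=(n_bands, 3)))

true_fracs = np.array([0.6, 0.3, 0.1])
pixel = E @ true_fracs + rng.normal(scale=1e-3, size=n_bands)

# Append a constraint row delta * (f1 + f2 + f3) = delta, so that a
# large delta pushes the solution towards sum-to-one fractions.
delta = 100.0
E_aug = np.vstack([E, delta * np.ones(3)])
p_aug = np.append(pixel, delta)

fracs, _ = nnls(E_aug, p_aug)
print("estimated fractions:", fracs.round(3))
```

The illuminated-leaf fraction recovered this way is what the study uses as the per-pixel weight when aggregating spectra to the crown scale.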

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make deep learning in remote sensing ridiculously simple to start with. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    Fusion of hyperspectral, multispectral, color and 3D point cloud information for the semantic interpretation of urban environments

    In this paper, we address the semantic interpretation of urban environments on the basis of multi-modal data in the form of RGB color imagery, hyperspectral data and LiDAR data acquired from aerial sensor platforms. We extract radiometric features based on the given RGB color imagery and the given hyperspectral data, and we also consider different transformations into potentially better data representations. For the RGB color imagery, these are achieved via color invariants, normalization procedures or specific assumptions about the scene. For the hyperspectral data, we involve techniques for dimensionality reduction and feature selection as well as a transformation to multispectral Sentinel-2-like data of the same spatial resolution. Furthermore, we extract geometric features describing the local 3D structure from the given LiDAR data. The defined feature sets are provided separately and in different combinations as input to a Random Forest classifier. To assess the potential of the different feature sets and their combination, we present results achieved for the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.
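The feature-combination experiment described above can be sketched as follows: train the same Random Forest on radiometric features alone, geometric features alone, and their concatenation, and compare cross-validated accuracy. The synthetic features below merely stand in for the RGB/hyperspectral and LiDAR inputs; the class structure is an illustrative assumption.

```python
# Sketch of the feature-set comparison: Random Forest on radiometric
# features, geometric features, and both combined, with 5-fold CV.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
y = rng.integers(0, 3, size=n)                # three land-cover classes

# Radiometric features carry part of the class signal, geometric
# features a complementary part (here: only the class parity).
radiometric = y[:, None] + rng.normal(scale=1.0, size=(n, 5))
geometric = (y % 2)[:, None] + rng.normal(scale=1.0, size=(n, 3))

results = {}
for name, X in [("radiometric", radiometric),
                ("geometric", geometric),
                ("combined", np.hstack([radiometric, geometric]))]:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:11s} accuracy: {results[name]:.2f}")
```

Because the geometric features alone cannot separate all classes while the combined set can, the comparison mirrors the complementarity argument the paper evaluates on the MUUFL Gulfport data.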