    Exploring Data Mining Techniques for Tree Species Classification Using Co-Registered LiDAR and Hyperspectral Data

    NASA Goddard’s LiDAR, Hyperspectral, and Thermal imager provides co-registered remote sensing data over experimental forests. Data mining methods achieved a final tree species classification accuracy of 68% on the combined LiDAR and hyperspectral dataset, and the results show promise for addressing deforestation and carbon sequestration at a species-specific level.
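    A minimal sketch (Python, not the study's actual processing chain) of why co-registration matters here: when the LiDAR-derived canopy height model and the hyperspectral cube share one grid, per-crown features can be pulled from both with a single crown mask. All array names, sizes and the crown raster below are invented for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)
chm = rng.random((50, 50)) * 30                 # canopy height model, metres (fake)
cube = rng.random((50, 50, 114))                # hyperspectral cube (fake)
crown_ids = rng.integers(0, 10, size=(50, 50))  # crown label raster, 0 = background

features = {}
for cid in np.unique(crown_ids):
    if cid == 0:
        continue
    mask = crown_ids == cid
    features[cid] = np.concatenate([
        [chm[mask].max(), chm[mask].mean()],     # LiDAR structure summaries
        cube[mask].mean(axis=0),                 # mean crown spectrum (114 bands)
    ])

print(len(features), "crowns,", features[1].shape[0], "features each")
```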

    Mapping urban tree species in a tropical environment using airborne multispectral and LiDAR data

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. An accurate and up-to-date urban tree inventory is an essential resource for the development of strategies towards sustainable urban planning, as well as for effective management and preservation of biodiversity. Trees contribute to thermal comfort within urban centers by lessening the heat island effect and have a direct impact on the reduction of air pollution. However, mapping individual tree species normally involves time-consuming field work over large areas or image interpretation performed by specialists. The integration of airborne LiDAR data with high-spatial-resolution multispectral aerial imagery is an alternative and effective approach to differentiate tree species at the individual crown level. This thesis investigates the potential of such remotely sensed data to discriminate five common urban tree species using traditional machine learning classifiers (Random Forest, Support Vector Machine, and k-Nearest Neighbors) in the tropical environment of Salvador, Brazil. Vegetation indices and texture information were extracted from the multispectral imagery, and LiDAR-derived variables for tree crowns were tested separately and in combination to perform tree species classification with the three classifiers. Random Forest outperformed the other two classifiers, reaching an overall accuracy of 82.5% when using combined multispectral and LiDAR data. The results indicate that (1) given the similarity in spectral signatures, multispectral data alone are not sufficient to distinguish tropical tree species (only the k-NN classifier could detect all species); (2) height values and the intensity of crown return points were the most relevant LiDAR features, and the combination of both datasets improved accuracy by up to 20%; (3) a canopy height model derived from the LiDAR point cloud is an effective basis for delineating individual tree crowns in a semi-automatic approach.
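    Below is an illustrative sketch, not the thesis code, of the comparison the abstract describes: the three named classifiers evaluated on multispectral-only, LiDAR-only and combined crown features. The feature matrices, class count and hyperparameters are assumptions.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_crowns = 300
spectral = rng.normal(size=(n_crowns, 12))   # vegetation indices + texture (assumed)
lidar = rng.normal(size=(n_crowns, 8))       # height and intensity statistics (assumed)
y = rng.integers(0, 5, size=n_crowns)        # 5 urban tree species

feature_sets = {"spectral": spectral, "lidar": lidar,
                "combined": np.hstack([spectral, lidar])}
classifiers = {"RF": RandomForestClassifier(n_estimators=500, random_state=0),
               "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
               "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))}

# cross-validated accuracy for every feature set / classifier combination
for fs_name, X in feature_sets.items():
    for clf_name, clf in classifiers.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{fs_name:9s} {clf_name:3s} accuracy: {acc:.2f}")
```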

    Fusion of hyperspectral, multispectral, color and 3D point cloud information for the semantic interpretation of urban environments

    In this paper, we address the semantic interpretation of urban environments on the basis of multi-modal data in the form of RGB color imagery, hyperspectral data and LiDAR data acquired from aerial sensor platforms. We extract radiometric features from the given RGB color imagery and the given hyperspectral data, and we also consider different transformations to potentially better-suited data representations. For the RGB color imagery, these are achieved via color invariants, normalization procedures or specific assumptions about the scene. For the hyperspectral data, we involve techniques for dimensionality reduction and feature selection as well as a transformation to multispectral Sentinel-2-like data of the same spatial resolution. Furthermore, we extract geometric features describing the local 3D structure from the given LiDAR data. The defined feature sets are provided separately and in different combinations as input to a Random Forest classifier. To assess the potential of the different feature sets and their combinations, we present results achieved for the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.
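    One common way to obtain "geometric features describing the local 3D structure" from a point cloud is via eigenvalue-based descriptors of local neighbourhoods; the sketch below assumes that interpretation and uses a made-up point cloud, so it is not the authors' implementation.
```python
import numpy as np
from scipy.spatial import cKDTree

def local_geometric_features(points, k=20):
    """Linearity, planarity, sphericity per point from its k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.zeros((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                 # 3x3 covariance of the neighbourhood
        e = np.sort(np.linalg.eigvalsh(cov))[::-1]   # e1 >= e2 >= e3 >= 0
        e1 = e[0] if e[0] > 0 else 1e-12
        feats[i] = [(e[0] - e[1]) / e1,              # linearity
                    (e[1] - e[2]) / e1,              # planarity
                    e[2] / e1]                       # sphericity
    return feats

pts = np.random.default_rng(2).normal(size=(500, 3))   # stand-in point cloud
print(local_geometric_features(pts, k=15)[:3])
```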

    Classification of Expansive Grassland Species in Different Growth Stages Based on Hyperspectral and LiDAR Data

    Classification of expansive species with remote sensing techniques offers great support for botanical field work aimed at detecting their distribution within areas of conservation value and assessing the threat they pose to natural habitats. A large number of spectral bands and high spatial resolution allow for the identification of particular species, while LiDAR (Light Detection and Ranging) data provide information on vegetation structure. Because the species' features change during the growing season, it is important to know when their spectral responses are unique against the background of the surrounding vegetation. The aim of the study was to identify two expansive grass species, Molinia caerulea and Calamagrostis epigejos, in a Natura 2000 area in Poland depending on the period and dataset used. Field work was carried out during late spring, summer and early autumn, in parallel with remote sensing data acquisition. Airborne 1-m resolution HySpex images and LiDAR data were used. HySpex images were corrected geometrically and atmospherically before Minimum Noise Fraction (MNF) transformation and vegetation index calculation. From the LiDAR point cloud, a Canopy Height Model, vegetation structure metrics from discrete and full-waveform data, and topographic indices were generated. Classifications were performed using a Random Forest algorithm. The results comprise post-classification maps and their accuracies: the Kappa value and the F1 score, the harmonic mean of producer (PA) and user (UA) accuracy, calculated iteratively. Based on these accuracies and botanical knowledge, it was possible to assess the best identification date and dataset for analysing both species. For M. caerulea the highest median Kappa was 0.85 (F1 = 0.89) in August, and for C. epigejos 0.65 (F1 = 0.73) in September. For both species, adding discrete or full-waveform LiDAR data improved the results. We conclude that hyperspectral (HS) and LiDAR airborne data could be useful to identify both species.
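    A hedged sketch of the iterative accuracy assessment outlined above: repeated random splits of the reference data, a Random Forest fit per split, and the median Kappa and F1 reported. The feature matrix, number of iterations and the macro-averaged F1 are placeholders rather than the study's exact protocol.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, f1_score

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 30))            # e.g. MNF bands + indices + LiDAR layers (fake)
y = rng.integers(0, 3, size=600)          # target species vs. background classes (fake)

kappas, f1s = [], []
for it in range(50):                      # number of iterations assumed
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                              stratify=y, random_state=it)
    clf = RandomForestClassifier(n_estimators=200, random_state=it).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    kappas.append(cohen_kappa_score(y_te, pred))
    f1s.append(f1_score(y_te, pred, average="macro"))   # macro-averaged for brevity

print(f"median Kappa: {np.median(kappas):.2f}, median F1: {np.median(f1s):.2f}")
```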

    Tree species classification from AVIRIS-NG hyperspectral imagery using convolutional neural networks

    This study focuses on the automatic classification of tree species using a three-dimensional convolutional neural network (CNN) based on field-sampled ground reference data, a LiDAR point cloud and AVIRIS-NG airborne hyperspectral remote sensing imagery with 2 m spatial resolution acquired on 14 June 2021. I created a tree species map for my 10.4 km2 study area, located in the Jurapark Aargau, a Swiss regional park of national interest. I collected ground reference data for six major tree species present in the study area (Quercus robur, Fagus sylvatica, Fraxinus excelsior, Pinus sylvestris, Tilia platyphyllos, total n = 331). To match the sampled ground reference to the 425-band AVIRIS-NG hyperspectral imagery, I delineated individual tree crowns (ITCs) from a canopy height model (CHM) based on the LiDAR point cloud. After matching the ground reference data to the hyperspectral imagery, I split the extracted image patches into training, validation, and testing subsets. The amount of training, validation and testing data was increased by applying image augmentation through rotating, flipping, and changing the brightness of the original input data. The classifier is a CNN trained on the first 32 principal components (PCs) extracted from the AVIRIS-NG data. The CNN uses image patches of 5 × 5 pixels and consists of two convolutional layers and two fully connected layers, the last of which produces the final classification via a softmax activation function. The results show that the CNN classifier outperforms comparable conventional classification methods: the CNN model predicts the correct tree species with an overall accuracy of 70% and an average F1-score of 0.67, while a random forest classifier reached an overall accuracy of 67% and an average F1-score of 0.61, and a support-vector machine classified the tree species with an overall accuracy of 66% and an average F1-score of 0.62. This work highlights that CNNs based on imaging spectroscopy data can produce highly accurate, high-resolution tree species distribution maps from a relatively small set of training data, thanks to the high dimensionality of hyperspectral images and the ability of CNNs to exploit spatial and spectral features of the data. These maps provide valuable input for modelling the distributions of other plant and animal species and of ecosystem services. In addition, this work illustrates the importance of direct collaboration with environmental practitioners to ensure user needs are met. This aspect will be evaluated further in future work by assessing how these products are used by environmental practitioners and as input for modelling purposes.
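    The patch-based CNN described above (5 × 5 pixel patches over the first 32 principal components, two convolutional layers, two fully connected layers, softmax over six species) can be sketched as follows; layer widths and other details are assumptions, not the thesis configuration.
```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, in_bands=32, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=3), nn.ReLU(),   # 5x5 -> 3x3
            nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(),        # 3x3 -> 1x1
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),          # softmax applied at prediction time
        )

    def forward(self, x):                      # x: (N, 32, 5, 5) PCA patches
        return self.classifier(self.features(x))

model = PatchCNN()
logits = model(torch.randn(8, 32, 5, 5))       # batch of 8 (augmented) patches
probs = torch.softmax(logits, dim=1)           # per-species probabilities
print(probs.shape)                             # torch.Size([8, 6])
```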

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry

    Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate scene understanding. This will enable, among others, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent and fast-developing research in computational fields; however, some issues related to computationally expensive processes in the integration of multi-source sensing data remain. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned Aerial Vehicles (UAVs) and sensors able to capture massive datasets with a high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey aims to present a summary of previous work according to the most relevant contributions for the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery. The surveyed applications focus on agriculture and forestry, since these fields concentrate most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches, which are also summarized in this work. Finally, some open issues and future research directions are presented.

    Mapping Chestnut Stands Using Bi-Temporal VHR Data

    This study analyzes the potential of very high resolution (VHR) remote sensing images and extended morphological profiles for mapping chestnut stands on Tenerife Island (Canary Islands, Spain). Given their relevance for ecosystem services in the region (cultural and provisioning services), the public sector demands up-to-date information on chestnut stands, and a simple, straightforward approach is presented in this study. We used two VHR WorldView images (March and May 2015) to cover different phenological phases. Moreover, we included spatial information in the classification process through extended morphological profiles (EMPs). Random forest is used for the classification, and we analyzed the impact of the bi-temporal information as well as of the spatial information on the classification accuracies. The detailed accuracy assessment clearly reveals the benefit of bi-temporal VHR WorldView images and of the spatial information derived by EMPs in terms of mapping accuracy. The bi-temporal classification outperforms, or at least performs as well as, the classifications based on the mono-temporal data. The inclusion of spatial information via EMPs further increases the classification accuracy by 5% and reduces the quantity and allocation disagreements in the final map. Overall, the proposed classification strategy proves useful for mapping chestnut stands in a heterogeneous and complex landscape, such as the municipality of La Orotava, Tenerife.
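    For readers unfamiliar with EMPs, the sketch below illustrates the general construction: openings and closings by reconstruction with structuring elements of increasing size, stacked as extra spatial features. The radii and the random input band are illustrative and not the study's settings.
```python
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def emp(band, radii=(2, 4, 6, 8)):
    """Return a (rows, cols, 1 + 2*len(radii)) morphological profile for one band."""
    layers = [band]
    for r in radii:
        se = disk(r)
        # opening by reconstruction: eroded image as marker, original as mask
        layers.append(reconstruction(erosion(band, se), band, method="dilation"))
        # closing by reconstruction: dilated image as marker, original as mask
        layers.append(reconstruction(dilation(band, se), band, method="erosion"))
    return np.stack(layers, axis=-1)

band = np.random.default_rng(4).random((120, 120))   # stand-in for a WorldView band/PC
profile = emp(band)
print(profile.shape)          # (120, 120, 9); computed per date for bi-temporal input
```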

    An object-based approach for mapping forest structural types based on low-density LiDAR and multispectral imagery

    Mapping forest structure variables provides important information for the estimation of forest biomass, carbon stocks and pasture suitability, and for wildfire risk prevention and control. Optimizing the prediction models of these variables requires an adequate stratification of the forest landscape in order to create specific models for each structural type or stratum. This paper proposes and validates an object-oriented classification methodology based on low-density LiDAR data (0.5 points m−2) available at national level, and on WorldView-2 and Sentinel-2 multispectral imagery, to categorize Mediterranean forests into generic structural types. After preprocessing the data sets, the area was segmented using a multiresolution algorithm; features describing the 3D vertical structure were extracted from the LiDAR data, and spectral and texture features from the satellite images. After feature selection, objects were classified into the following structural classes: grasslands, shrubs, forest (without shrubs), mixed forest (trees and shrubs) and dense young forest. Four classification algorithms (C4.5 decision trees, random forest, k-nearest neighbour and support vector machine) were evaluated using cross-validation techniques. The results show that the integration of low-density LiDAR and multispectral imagery provides a set of complementary features that improves the results (90.75% overall accuracy), and that object-oriented classification techniques are efficient for stratifying Mediterranean forest areas into structural and fuel-related categories. Further work will focus on the creation and validation of a different prediction model adapted to each stratum.
    Ruiz Fernández, L.Á.; Recio Recio, J.A.; Crespo-Peremarch, P.; Sapena, M. (2018). An object-based approach for mapping forest structural types based on low-density LiDAR and multispectral imagery. Geocarto International, 33(5), 443–457. https://doi.org/10.1080/10106049.2016.1265595
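    As an illustration of the LiDAR component of such a pipeline, the sketch below computes simple per-object vertical-structure descriptors from the normalised heights of points falling inside one segmented object; the threshold and feature set are assumptions, not the paper's exact variables.
```python
import numpy as np

def vertical_structure_features(z, ground_threshold=0.5):
    """Summarise normalised return heights (m) of the points within one object."""
    z = np.asarray(z, dtype=float)
    veg = z[z > ground_threshold]                 # returns above an assumed ground band
    cover = veg.size / max(z.size, 1)             # fraction of non-ground returns
    if veg.size == 0:
        return {"cover": 0.0, "p50": 0.0, "p95": 0.0, "mean": 0.0, "std": 0.0}
    return {
        "cover": cover,
        "p50": float(np.percentile(veg, 50)),
        "p95": float(np.percentile(veg, 95)),
        "mean": float(veg.mean()),
        "std": float(veg.std()),
    }

# e.g. a shrub object and a tall forest object differ mainly in p95 and cover
print(vertical_structure_features([0.1, 0.3, 1.2, 2.0, 1.8, 0.6]))
print(vertical_structure_features([0.2, 7.5, 12.1, 15.3, 14.8, 9.9]))
```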