
    Assessing the role of EO in biodiversity monitoring: options for integrating in-situ observations with EO within the context of the EBONE concept

    The European Biodiversity Observation Network (EBONE) is a European contribution on terrestrial monitoring to GEO BON, the Group on Earth Observations Biodiversity Observation Network. EBONE aims to develop a system of biodiversity observation at regional, national and European levels by assessing existing approaches in terms of their validity and applicability, starting in Europe and then expanding to regions in Africa. The objective of EBONE is to deliver: 1. A sound scientific basis for the production of statistical estimates of stock and change of key indicators; 2. The development of a system for estimating past changes and forecasting and testing policy options and management strategies for threatened ecosystems and species; 3. A proposal for a cost-effective biodiversity monitoring system. There is a consensus that Earth Observation (EO) has a role to play in monitoring biodiversity. With its capacity to observe detailed spatial patterns and variability across large areas at regular intervals, EO promises to deliver the type of spatial and temporal coverage that is beyond the reach of in-situ efforts alone. Furthermore, when considering the emerging networks of in-situ observations, the prospect of enhancing the quality of the information whilst reducing cost through integration is compelling. This report gives a realistic assessment of the role of EO in biodiversity monitoring and the options for integrating in-situ observations with EO within the context of the EBONE concept (cf. EBONE-ID1.4). The assessment is mainly based on a set of targeted pilot studies. Building on this assessment, the report then presents a series of recommendations on the best options for using EO in an effective, consistent and sustainable biodiversity monitoring scheme. The issues we faced were many: 1. Integration can be interpreted in different ways. 
One possible interpretation is the combined use of independent data sets to deliver a different but improved data set; another is the use of one data set to complement another. 2. The targeted improvement will vary with stakeholder group: some will seek more efficiency; others more reliable estimates (accuracy and/or precision); others more detail in space and/or time, or more of everything. 3. Integration requires a link between the data sets (EO and in-situ). The strength of the link between reflected electromagnetic radiation and the habitats and their biodiversity observed in-situ is a function of many variables, for example: the spatial scale of the observations; the timing of the observations; the adopted nomenclature for classification; the complexity of the landscape in terms of composition, spatial structure and the physical environment; and the habitat and land cover types under consideration. 4. The type of EO data available varies (as a function of, e.g., budget, size and location of the region, cloudiness, and national and/or international investment in airborne campaigns or space technology), which determines its capability to deliver the required output. EO and in-situ data could be combined in different ways, depending on the type of integration we wanted to achieve and the targeted improvement. We aimed for an improvement in accuracy (i.e. the reduction in error of our indicator estimate calculated for an environmental zone). Furthermore, EO would also provide the spatial patterns for correlated in-situ data. EBONE, in its initial development, focused on three main indicators covering: (i) the extent and change of habitats of European interest in the context of a general habitat assessment; (ii) abundance and distribution of selected species (birds, butterflies and plants); and (iii) fragmentation of natural and semi-natural areas. 
For habitat extent, we decided that it did not matter how in-situ data were integrated with EO as long as we could demonstrate that acceptable accuracies could be achieved and the precision could consistently be improved. The nomenclature used to map habitats in-situ was the General Habitat Classification. We considered the following options, in which EO and in-situ data play different roles: using in-situ samples to re-calibrate a habitat map independently derived from EO; improving the accuracy of in-situ sampled habitat statistics by post-stratification with correlated EO data; and using in-situ samples to train the classification of EO data into habitat types where the EO data deliver full coverage or a larger number of samples. For some of the above cases we also considered the impact that the sampling strategy employed to deliver the samples would have on the accuracy and precision achieved. Restricted access to Europe-wide species data prevented work on the indicator 'abundance and distribution of species'. With respect to the indicator 'fragmentation', we investigated ways of delivering EO-derived measures of habitat patterns that are meaningful to sampled in-situ observations.
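The post-stratification option described above can be sketched with synthetic numbers: a full-coverage EO layer supplies stratum proportions, and the in-situ sample mean is re-weighted by those known population shares. All quantities below (number of strata, stratum shares, indicator values, sample size) are illustrative assumptions, not EBONE data.

```python
import numpy as np

rng = np.random.default_rng(0)

# EO full-coverage map: one stratum label (e.g. an EO land-cover class) per map unit
N = 10_000
strata = rng.choice([0, 1, 2], size=N, p=[0.5, 0.3, 0.2])

# A habitat indicator that correlates with the EO stratum (synthetic)
indicator = np.where(strata == 0, 0.1, np.where(strata == 1, 0.5, 0.9))
indicator = indicator + rng.normal(0.0, 0.05, size=N)

# In-situ observations: a small simple random sample of map units
sample = rng.choice(N, size=200, replace=False)

# Unstratified estimate: plain sample mean
simple_mean = indicator[sample].mean()

# Post-stratified estimate: weight each stratum's sample mean by its
# EO-derived population share W_h = N_h / N, known from the full EO coverage
post_mean = 0.0
for h in (0, 1, 2):
    W_h = (strata == h).mean()
    in_h = sample[strata[sample] == h]
    post_mean += W_h * indicator[in_h].mean()

print(round(simple_mean, 3), round(post_mean, 3))
```

Because the stratum shares come from the EO map rather than the sample, the post-stratified estimator removes the sampling noise in the strata proportions, which is where the precision gain comes from.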

    A Framework for Evaluating Land Use and Land Cover Classification Using Convolutional Neural Networks

    Analyzing land use and land cover (LULC) using remote sensing (RS) imagery is essential for many environmental and social applications. The increase in availability of RS data has led to the development of new techniques for digital pattern classification. Very recently, deep learning (DL) models have emerged as a powerful solution to approach many machine learning (ML) problems. In particular, convolutional neural networks (CNNs) are currently the state of the art for many image classification tasks. While there exist several promising proposals on the application of CNNs to LULC classification, the validation framework proposed for the comparison of different methods could be improved with the use of a standard validation procedure for ML based on cross-validation and its subsequent statistical analysis. In this paper, we propose a general CNN, with a fixed architecture and parametrization, to achieve high accuracy on LULC classification over RS data from different sources such as radar and hyperspectral. We also present a methodology to perform a rigorous experimental comparison between our proposed DL method and other ML algorithms such as support vector machines, random forests, and k-nearest-neighbors. The analysis carried out demonstrates that the CNN outperforms the rest of the techniques, achieving a high level of performance for all the datasets studied, regardless of their different characteristics.
    Ministerio de Economía y Competitividad TIN2014-55894-C2-1-R; Ministerio de Economía y Competitividad TIN2017-88209-C2-2-
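The validation procedure advocated above, evaluating every method on the same cross-validation folds so per-fold scores can be compared statistically, can be sketched with toy stand-ins: synthetic two-class "pixels" and two simple classifiers (a nearest-centroid rule and 1-NN) in place of the CNN, SVM, RF and k-NN of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data: 5 "bands", classes with shifted mean spectra
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(1.5, 1.0, (100, 5))])
y = np.repeat([0, 1], 100)

def nearest_centroid(Xtr, ytr, Xte):
    # assign each test sample to the class with the closest training centroid
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    d = ((Xte[:, None, :] - cents[None]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def one_nn(Xtr, ytr, Xte):
    # label of the single nearest training sample
    d = ((Xte[:, None, :] - Xtr[None]) ** 2).sum(axis=2)
    return ytr[d.argmin(axis=1)]

# k-fold cross-validation with identical folds for both methods,
# so the per-fold accuracies are paired and comparable
k = 5
idx = rng.permutation(len(y))
folds = np.array_split(idx, k)
scores = {"centroid": [], "1nn": []}
for i in range(k):
    te = folds[i]
    tr = np.concatenate([folds[j] for j in range(k) if j != i])
    for name, clf in (("centroid", nearest_centroid), ("1nn", one_nn)):
        pred = clf(X[tr], y[tr], X[te])
        scores[name].append((pred == y[te]).mean())

for name, s in scores.items():
    print(name, round(float(np.mean(s)), 3))
```

The paired per-fold scores are what a subsequent statistical test (e.g. a paired test over folds) would operate on; the classifiers themselves are placeholders for the methods compared in the paper.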

    Ash Tree Identification Based on the Integration of Hyperspectral Imagery and High-density Lidar Data

    Monitoring and management of ash trees have become particularly important in recent years due to the heightened risk of attack from the invasive pest, the emerald ash borer (EAB). However, distinguishing ash from other deciduous trees can be challenging. Hyperspectral imagery and light detection and ranging (LiDAR) data are two valuable data sources that are often used for tree species classification. Hyperspectral imagery measures detailed spectral reflectance related to the biochemical properties of vegetation, while LiDAR data measures the three-dimensional structure of tree crowns related to morphological characteristics. Thus, the accuracy of vegetation classification may be improved by combining both techniques. Therefore, the objective of this research is to integrate hyperspectral imagery and LiDAR data to improve ash tree identification. Specifically, the research aims include: 1) using LiDAR data for individual tree crown segmentation; 2) using hyperspectral imagery for extraction of relatively pure crown spectra; 3) fusing hyperspectral and LiDAR data for ash tree identification. It is expected that the classification accuracy of ash trees will be significantly improved with the integration of hyperspectral and LiDAR techniques. Analysis results suggest that, first, 3D crown structures of individual trees can be reconstructed using a set of generalized geometric models that optimally match the LiDAR-derived raster image, and crown widths can be further estimated using tree height and shape-related parameters as independent variables and ground measurements of crown widths as dependent variables. Second, with a constrained linear spectral mixture analysis method, the fractions of all materials within a pixel can be extracted, and relatively pure crown-scale spectra can be further calculated using the illuminated-leaf fraction as a weighting factor for tree species classification. 
Third, both a crown shape index (SI) and a coefficient of variation (CV) can be extracted from LiDAR data as variables that are invariant over a tree's life cycle, and they improve ash tree identification when integrated with pixel-weighted crown spectra. Therefore, three major contributions of this research have been made in the field of tree species classification: 1) the automatic estimation of individual tree crown width from LiDAR data by combining a generalized geometric model and a regression model; 2) the computation of relatively pure crown-scale spectral reflectance using a pixel-weighting algorithm for tree species classification; 3) the fusion of shape-related structural features and pixel-weighted crown-scale spectral features for improving ash tree identification.
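The constrained spectral mixture analysis step can be illustrated with a minimal sum-to-one-constrained unmixing of a single synthetic pixel. The endmember spectra and fractions below are invented stand-ins for illuminated leaf, shaded leaf and background; the constraint is enforced with a heavily weighted extra equation, one common least-squares trick, rather than the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(2)

bands = 50
# Endmember spectra as columns: illuminated leaf, shaded leaf, background (synthetic)
E = np.abs(rng.normal(0.5, 0.2, (bands, 3)))

# Simulate a mixed pixel: 60% illuminated leaf, 25% shade, 15% background, plus noise
true_f = np.array([0.60, 0.25, 0.15])
pixel = E @ true_f + rng.normal(0.0, 0.005, bands)

# Enforce sum(f) = 1 by appending a heavily weighted constraint row to the system
w = 1e3
A = np.vstack([E, w * np.ones((1, 3))])
b = np.concatenate([pixel, [w]])
f, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(f, 3), round(float(f.sum()), 3))
```

The recovered illuminated-leaf fraction (here `f[0]`) is the quantity that would then serve as the per-pixel weight when aggregating pixels into a crown-scale spectrum.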

    Exploring the Potential of Feature Selection Methods in the Classification of Urban Trees Using Field Spectroscopy Data

    Mapping of vegetation at the species level using hyperspectral satellite data can be effective and accurate because its high spectral and spatial resolutions can detect detailed information about a target object. Its wide application, however, is restricted not only by its high cost and large data storage requirements, but also by a processing challenge known as the Hughes effect, whereby classification accuracy decreases once the number of features or wavelengths passes a certain limit. This study aimed to explore the potential of feature selection methods in the classification of urban trees using field hyperspectral data. We identified the feature selection method that best selects the key wavelengths responding to the target urban tree species for effective and accurate classification. The study compared the effectiveness of Principal Component Analysis Discriminant Analysis (PCA-DA), Partial Least Squares Discriminant Analysis (PLS-DA) and Guided Regularized Random Forest (GRRF) in selecting the key wavelengths for classification of urban trees. The classification performance of the Random Forest (RF) and Support Vector Machine (SVM) algorithms was also compared to determine the importance of the key wavelengths selected for the detection of the target urban trees. The feature selection methods managed to reduce the high dimensionality of the hyperspectral data. Both the PCA-DA and PLS-DA selected 10 wavelengths and the GRRF algorithm selected 13 wavelengths from the entire dataset (n = 1523). Most of the key wavelengths were from the short-wave infrared region (1300-2500 nm). SVM outperformed RF in classifying with the key wavelengths selected by the feature selection methods. The SVM classifier produced overall accuracy values of 95.3%, 93.3% and 86% using the GRRF, PLS-DA and PCA-DA techniques, respectively, whereas those for the RF classifier were 88.7%, 72% and 56.8%, respectively.
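The wavelength-selection idea can be sketched minimally, assuming a simple between-class/within-class variance ratio as the ranking criterion in place of PCA-DA, PLS-DA or GRRF: on synthetic two-species spectra in which only a few bands carry signal, ranking bands by separability recovers the informative ones and shrinks the dimensionality before classification.

```python
import numpy as np

rng = np.random.default_rng(3)

n_bands = 200
# Two synthetic "species"; only 10 bands actually differ between the classes
informative = rng.choice(n_bands, 10, replace=False)
X = rng.normal(0.0, 1.0, (120, n_bands))
y = np.repeat([0, 1], 60)
X[np.ix_(y == 1, informative)] += 2.0  # class shift on the informative bands only

def f_ratio(X, y):
    # between-class separation over within-class spread, per band
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    v0, v1 = X[y == 0].var(axis=0), X[y == 1].var(axis=0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-9)

scores = f_ratio(X, y)
top = np.argsort(scores)[::-1][:10]            # keep the 10 best-separating bands
hit_rate = float(np.isin(top, informative).mean())
print(sorted(top.tolist()), round(hit_rate, 2))
```

A classifier such as SVM or RF would then be trained on `X[:, top]` only, which is the dimensionality-reduction benefit the abstract describes; the criterion, band counts and class shift here are all illustrative.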

    Fusion of hyperspectral, multispectral, color and 3D point cloud information for the semantic interpretation of urban environments

    In this paper, we address the semantic interpretation of urban environments on the basis of multi-modal data in the form of RGB color imagery, hyperspectral data and LiDAR data acquired from aerial sensor platforms. We extract radiometric features based on the given RGB color imagery and the given hyperspectral data, and we also consider different transformations to potentially better data representations. For the RGB color imagery, these are achieved via color invariants, normalization procedures or specific assumptions about the scene. For the hyperspectral data, we involve techniques for dimensionality reduction and feature selection as well as a transformation to multispectral Sentinel-2-like data of the same spatial resolution. Furthermore, we extract geometric features describing the local 3D structure from the given LiDAR data. The defined feature sets are provided separately and in different combinations as input to a Random Forest classifier. To assess the potential of the different feature sets and their combination, we present results achieved for the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.
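The fusion strategy, concatenating separately extracted feature sets and feeding them to one classifier, can be sketched as follows. The radiometric and geometric features, the class structure, and the nearest-centroid classifier (standing in for the Random Forest) are all synthetic assumptions, not the MUUFL Gulfport data.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 300
y = rng.integers(0, 2, n)
# Each modality carries part of the class signal (weak shift per feature),
# mimicking radiometric features from imagery and geometric features from LiDAR
radiometric = rng.normal(0.0, 1.0, (n, 4)) + 0.7 * y[:, None]
geometric = rng.normal(0.0, 1.0, (n, 3)) + 0.7 * y[:, None]

def centroid_acc(F, y):
    # simple holdout: first half trains the centroids, second half is tested
    tr, te = np.arange(n // 2), np.arange(n // 2, n)
    cents = np.array([F[tr][y[tr] == c].mean(axis=0) for c in (0, 1)])
    pred = ((F[te][:, None, :] - cents[None]) ** 2).sum(axis=2).argmin(axis=1)
    return float((pred == y[te]).mean())

accs = {}
for name, F in [("radiometric", radiometric),
                ("geometric", geometric),
                ("fused", np.hstack([radiometric, geometric]))]:
    accs[name] = centroid_acc(F, y)
    print(name, round(accs[name], 3))
```

Feature-level concatenation is the simplest fusion scheme consistent with the abstract's setup: each modality is featurized independently, and the classifier sees the stacked vector, so no co-registration beyond a shared sample index is assumed.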