
    A Comparative Analysis of Phytovolume Estimation Methods Based on UAV-Photogrammetry and Multispectral Imagery in a Mediterranean Forest

    Management and control operations are crucial for preventing forest fires, especially in Mediterranean forest areas with dry climatic periods. One such operation is prescribed burning, for which the biomass fuel present in the controlled plot must be accurately estimated. The most commonly used methods for estimating biomass are time-consuming and labour-intensive. Unmanned aerial vehicles (UAVs) carrying multispectral sensors can be used to carry out accurate indirect measurements of terrain and vegetation morphology and their radiometric characteristics. Based on the products of a UAV-photogrammetric project, four phytovolume estimators were compared in a Mediterranean forest area, all obtained from the difference between a digital surface model (DSM) and a digital terrain model (DTM). The DSM was derived from a UAV-photogrammetric project based on the structure-from-motion algorithm. Four different methods for obtaining a DTM were used, based on an unclassified dense point cloud produced through a UAV-photogrammetric project (FFU), an unsupervised classified dense point cloud (FFC), a multispectral vegetation index (FMI), and a cloth simulation filter (FCS). Qualitative and quantitative comparisons assessed the ability of the phytovolume estimators to detect vegetation and to estimate the volume it occupies. The results show no significant differences in surface vegetation detection in any of the possible pairwise comparisons of the four estimators at a 95% confidence level, but FMI presented the best kappa value (0.678) in an error matrix analysis with reference data obtained from photointerpretation and supervised classification. Concerning the accuracy of phytovolume estimation, only FFU and FFC presented differences higher than two standard deviations in a pairwise comparison, and FMI presented the best RMSE (12.3 m) when the estimators were compared to 768 observed data points grouped in four 500 m² sample plots. FMI was the best of the four phytovolume estimators compared for low vegetation height in a Mediterranean forest. The use of FMI based on UAV data provides accurate phytovolume estimations that can be applied to several environmental management activities, including wildfire prevention. Multitemporal phytovolume estimations based on FMI could help model the evolution of forest resources in a very realistic way.
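
    As a rough illustration of the estimation principle described above (phytovolume as the DSM minus DTM difference summed over vegetated pixels, with an NDVI-style vegetation index standing in for the FMI masking step), the following minimal NumPy sketch may help; all array names, the pixel area, and the NDVI threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def phytovolume_from_surfaces(dsm, dtm, red, nir, pixel_area=0.01, ndvi_threshold=0.4):
    """Estimate phytovolume (m^3) as the DSM-DTM difference summed over
    vegetated pixels, where vegetation is masked with an NDVI threshold
    (the general idea behind an FMI-style estimator). All inputs are 2-D
    arrays of identical shape; pixel_area is the ground area of one pixel
    in m^2. Values here are placeholders, not the paper's settings."""
    height = dsm - dtm                       # canopy height per pixel (m)
    height = np.clip(height, 0, None)        # treat negative differences as noise
    ndvi = (nir - red) / (nir + red + 1e-9)  # multispectral vegetation index
    veg_mask = ndvi > ndvi_threshold         # keep only vegetated pixels
    return float(np.sum(height[veg_mask]) * pixel_area)
```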

    Exploring Data Mining Techniques for Tree Species Classification Using Co-Registered LiDAR and Hyperspectral Data

    NASA Goddard’s LiDAR, Hyperspectral, and Thermal imager provides co-registered remote sensing data on experimental forests. Data mining methods were used to achieve a final tree species classification accuracy of 68% using a combined LiDAR and hyperspectral dataset, and these methods show promise for addressing deforestation and carbon sequestration at the species level.
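
    The abstract does not name the specific data mining methods, so the sketch below simply illustrates feature-level fusion of LiDAR and hyperspectral features followed by a Random Forest classifier, a common choice for this kind of study; the feature matrices, label vector, and hyperparameters are stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-crown feature matrices: hyperspectral band statistics and
# LiDAR-derived structural metrics (e.g. height percentiles, point density).
rng = np.random.default_rng(0)
X_hyper = rng.normal(size=(200, 50))   # stand-in for hyperspectral features
X_lidar = rng.normal(size=(200, 10))   # stand-in for LiDAR structural features
y = rng.integers(0, 5, size=200)       # stand-in species labels

X_fused = np.hstack([X_hyper, X_lidar])          # simple feature-level fusion
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X_fused, y, cv=5)  # cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```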

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book provides the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Detection and Identification of Camouflaged Targets using Hyperspectral and LiDAR data

    Camouflaging is the process of merging a target with its background in order to reduce or delay its detection. It can be done using different materials and methods, such as camouflage nets and paints. Defence applications often require quick detection of camouflaged targets in a dynamic battlefield scenario. Although hyperspectral imaging (HSI) data may facilitate the detection of camouflaged targets, detection is complicated by issues such as spectral variability and high dimensionality. This paper presents a framework for the detection of camouflaged targets that allows military analysts to coordinate and utilise expert knowledge for resolving camouflaged targets from remotely sensed data. The desired camouflaged target (a set of three chairs under a camouflage net) is resolved in three steps. First, hyperspectral data processing detects the locations of potential camouflaged targets: it narrows down candidate locations by detecting the camouflage net using independent component analysis and spectral matching algorithms. Second, detection and identification are performed using LiDAR point cloud classification and morphological analysis; the HSI stage discards the redundant majority of the LiDAR point cloud so that only the small portion the system deems relevant is analysed in detail, facilitating the extraction of salient features of the potential camouflaged target. Lastly, the decisions obtained are fused to infer the identity of the desired targets. The experimental results indicate that the proposed approach can successfully resolve camouflaged targets, assuming some a priori knowledge of the morphology of the targets likely to be present.
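
    The abstract mentions "spectral matching algorithms" without specifying which; one standard spectral matching technique that could fill that role is the spectral angle mapper, sketched below. The function name, cube layout, and thresholding step are assumptions for illustration only.

```python
import numpy as np

def spectral_angle_map(cube, reference):
    """Spectral Angle Mapper: angle (radians) between each pixel spectrum of a
    (rows, cols, bands) hyperspectral cube and a reference spectrum, e.g. a
    library signature of a camouflage net material. Smaller angles indicate a
    closer spectral match."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    ref = reference.astype(float)
    cos = pixels @ ref / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

# Pixels whose angle falls below a chosen threshold become candidate net
# locations that a subsequent LiDAR stage could then analyse in detail.
```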

    An Analysis of multimodal sensor fusion for target detection in an urban environment

    This work makes a compelling case for simulation as an attractive tool in designing cutting-edge remote sensing systems to generate the sheer volume of data required for a reasonable trade study. The generalized approach presented here allows multimodal system designers to tailor target and sensor parameters for their particular scenarios of interest via synthetic image generation tools, ensuring that resources are best allocated while sensors are still in the design phase. Additionally, sensor operators can use the customizable process showcased here to optimize image collection parameters for existing sensors. In the remote sensing community, polarimetric capabilities are often seen as a tool without a widely accepted mission. This study proposes incorporating a polarimetric and spectral sensor in a multimodal architecture to improve target detection performance in an urban environment. Two novel multimodal fusion algorithms are proposed: one at the pixel level and another at the decision level. A synthetic urban scene is rendered for 355 unique combinations of illumination condition and sensor viewing geometry with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, and then validated to ensure the presence of enough background clutter. The utility of polarimetric information is shown to vary with the sun-target-sensor geometry, and the decision fusion algorithm is shown to generally outperform the pixel fusion algorithm. The results suggest that polarimetric information may be leveraged to restore the capabilities of a spectral sensor forced to image under less-than-ideal circumstances.
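
    The fusion algorithms themselves are described as novel and are not detailed in the abstract; as a purely generic illustration of what decision-level fusion of a spectral and a polarimetric detector can look like, here is a minimal weighted-vote sketch. The function name, thresholds, and weights are assumptions and do not reflect the paper's algorithms.

```python
import numpy as np

def decision_fusion(spectral_scores, polarimetric_scores,
                    spectral_threshold=0.5, polarimetric_threshold=0.5,
                    weights=(0.6, 0.4)):
    """Generic decision-level fusion: each modality first makes its own
    per-pixel detection decision, then the binary decisions are combined
    with a weighted vote. Thresholds and weights are illustrative only."""
    d_spec = (spectral_scores >= spectral_threshold).astype(float)
    d_pol = (polarimetric_scores >= polarimetric_threshold).astype(float)
    fused = weights[0] * d_spec + weights[1] * d_pol
    return fused >= 0.5   # final target / background decision per pixel
```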

    Fusion of hyperspectral, multispectral, color and 3D point cloud information for the semantic interpretation of urban environments

    In this paper, we address the semantic interpretation of urban environments on the basis of multi-modal data in the form of RGB color imagery, hyperspectral data, and LiDAR data acquired from aerial sensor platforms. We extract radiometric features from the given RGB color imagery and the given hyperspectral data, and we also consider different transformations into potentially more suitable data representations. For the RGB color imagery, these are achieved via color invariants, normalization procedures, or specific assumptions about the scene. For the hyperspectral data, we involve techniques for dimensionality reduction and feature selection as well as a transformation to multispectral Sentinel-2-like data of the same spatial resolution. Furthermore, we extract geometric features describing the local 3D structure from the given LiDAR data. The defined feature sets are provided separately and in different combinations as input to a Random Forest classifier. To assess the potential of the different feature sets and their combination, we present results achieved for the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.
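
    The experimental design described above (the same Random Forest classifier evaluated on each feature set separately and on their combinations) can be sketched as follows; the feature matrices, dimensions, labels, and hyperparameters are stand-ins, not the paper's actual features or settings.

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
feature_sets = {                       # stand-ins for the extracted features
    "rgb":   rng.normal(size=(n, 8)),  # radiometric / color-invariant features
    "hyper": rng.normal(size=(n, 30)), # reduced hyperspectral features
    "lidar": rng.normal(size=(n, 12)), # local 3-D structure features
}
y = rng.integers(0, 6, size=n)         # stand-in class labels

# Evaluate every non-empty combination of feature sets with the same classifier.
for r in range(1, len(feature_sets) + 1):
    for combo in combinations(feature_sets, r):
        X = np.hstack([feature_sets[name] for name in combo])
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        acc = cross_val_score(clf, X, y, cv=3).mean()
        print(f"{'+'.join(combo):17s} accuracy: {acc:.2f}")
```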