
    Forest structure from terrestrial laser scanning – in support of remote sensing calibration/validation and operational inventory

    Forests are an important part of the natural ecosystem, providing resources such as timber and fuel, performing services such as energy exchange and carbon storage, and presenting risks, such as fire damage and invasive species impacts. Improved characterization of forest structural attributes is desirable, as it could improve our understanding and management of these natural resources. However, the traditional, systematic collection of forest information – dubbed “forest inventory” – is time-consuming, expensive, and coarse when compared to novel 3-D measurement technologies. Remote sensing estimates, on the other hand, provide synoptic coverage, but often fail to capture the fine-scale structural variation of the forest environment. Terrestrial laser scanning (TLS) has demonstrated a potential to address these limitations, but its operational use has remained limited due to unsatisfactory performance characteristics relative to the budgetary constraints of many end-users. To address this gap, my dissertation advanced affordable mobile laser scanning capabilities for operational forest structure assessment. We developed geometric reconstruction of forest structure from rapid-scan, low-resolution point cloud data, providing for automatic extraction of standard forest inventory metrics. To augment these results over larger areas, we designed a view-invariant feature descriptor to enable marker-free registration of TLS data pairs, without knowledge of the initial sensor pose. Finally, a graph-theory framework was integrated to perform multi-view registration between a network of disconnected scans, which provided improved assessment of forest inventory variables.
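As a concrete illustration of extracting a standard inventory metric from point cloud data, the sketch below fits a circle to a horizontal slice of stem points to estimate diameter at breast height (DBH). This is a generic algebraic (Kåsa) circle fit, not the dissertation's actual reconstruction method; the function name and interface are hypothetical.

```python
import numpy as np

def fit_stem_circle(xy):
    """Algebraic (Kasa) circle fit to a horizontal slice of stem points.

    xy: (N, 2) cross-section coordinates at breast height (1.3 m).
    Returns (cx, cy, r); 2*r is a DBH estimate - one of the standard
    inventory metrics recoverable even from low-resolution scans.
    """
    x, y = np.asarray(xy, float).T
    # Linearize x^2 + y^2 = p*x + q*y + s and solve by least squares.
    a = np.column_stack([x, y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (p, q, s), *_ = np.linalg.lstsq(a, rhs, rcond=None)
    cx, cy = p / 2.0, q / 2.0
    r = np.sqrt(s + cx ** 2 + cy ** 2)
    return cx, cy, r
```

The least-squares formulation keeps the fit robust to the sparse, noisy returns a rapid low-resolution scan produces on a stem surface.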
This work addresses a major limitation related to the inability of TLS to assess forest structure at an operational scale, and may facilitate improved understanding of the phenomenology of airborne sensing systems by providing fine-scale reference data with which to interpret the active or passive electromagnetic radiation interactions with forest structure. Outputs are being utilized to provide antecedent science data for NASA’s HyspIRI mission and to support the National Ecological Observatory Network’s (NEON) long-term environmental monitoring initiatives.
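The multi-view registration step described above can be pictured as composing pairwise scan-to-scan transforms along a spanning tree of the scan-connectivity graph. The sketch below shows that composition step only; the abstract's actual graph-theory framework is not specified, and the function name and data layout are assumptions.

```python
import numpy as np
from collections import deque

def global_poses(n_scans, pairwise, root=0):
    """Compose global scan poses from pairwise registrations.

    pairwise maps (i, j) -> 4x4 homogeneous transform taking scan j's
    frame into scan i's. A breadth-first traversal of the connectivity
    graph yields one global pose per scan, in the root scan's frame.
    """
    # Undirected adjacency list over the measured scan pairs.
    adj = {k: [] for k in range(n_scans)}
    for (i, j) in pairwise:
        adj[i].append(j)
        adj[j].append(i)

    poses = {root: np.eye(4)}
    queue = deque([root])
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j in poses:
                continue
            if (i, j) in pairwise:
                t_ij = pairwise[(i, j)]
            else:  # only the reverse edge was measured; invert it
                t_ij = np.linalg.inv(pairwise[(j, i)])
            poses[j] = poses[i] @ t_ij
            queue.append(j)
    return poses
```

Chaining transforms this way links scans that never directly overlap, which is what lets a network of disconnected TLS stations cover a larger inventory area.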

    Integration of Vessel-Based Hyperspectral Scanning and 3D-Photogrammetry for Mobile Mapping of Steep Coastal Cliffs in the Arctic

    Remote and extreme regions such as the Arctic remain challenging ground for geological mapping and mineral exploration. Coastal cliffs are often the only major well-exposed outcrops, but are mostly not observable by air/spaceborne nadir remote sensing sensors. Current outcrop mapping efforts rely on the interpretation of Terrestrial Laser Scanning and oblique photogrammetry, which have inadequate spectral resolution to allow for detection of subtle lithological differences. This study aims to integrate 3D-photogrammetry with vessel-based hyperspectral imaging to complement geological outcrop models with quantitative information on mineral variations, and thus to enable the differentiation of barren rocks from potential economic ore deposits. We propose an innovative workflow based on: (1) the correction of hyperspectral images by eliminating the distortion effects originating from the periodic movements of the vessel; (2) lithological mapping based on spectral information; and (3) accurate 3D integration of spectral products with photogrammetric terrain data. The method is tested using experimental data acquired from near-vertical cliff sections in two parts of Greenland, in Karrat (Central West) and Søndre Strømfjord (South West). Root-Mean-Square Errors of (6.7, 8.4) pixels for Karrat and (3.9, 4.5) pixels for Søndre Strømfjord in the X and Y directions demonstrate the geometric accuracy of the final 3D products and allow a precise mapping of the targets identified using the hyperspectral data contents. This study highlights the potential of using other operational mobile platforms (e.g., unmanned systems) for regional mineral mapping based on horizontal viewing geometry and multi-source and multi-scale data fusion approaches.
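The per-direction pixel accuracies quoted above are directional Root-Mean-Square Errors over matched control points. A minimal sketch of that computation, with an assumed function name and (N, 2) point layout:

```python
import numpy as np

def directional_rmse(predicted, reference):
    """Per-axis RMSE (in pixels) between matched control points.

    predicted, reference: (N, 2) arrays of (x, y) image coordinates,
    e.g. control points in the registered hyperspectral product vs.
    their locations in the photogrammetric reference.
    Returns (rmse_x, rmse_y), one accuracy figure per direction.
    """
    err = np.asarray(predicted, float) - np.asarray(reference, float)
    return tuple(np.sqrt(np.mean(err ** 2, axis=0)))
```

Reporting X and Y separately, as the study does, exposes direction-dependent distortion such as the along-track wobble introduced by the vessel's periodic motion.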

    Multi-view alignment with database of features for an improved usage of high-end 3D scanners

    The usability of high-precision and high-resolution 3D scanners is of crucial importance due to the increasing demand for 3D data in both professional and general-purpose applications. Simplified, intuitive and rapid object modeling requires effective and automated alignment pipelines capable of tracing back each independently acquired range image of the scanned object into a common reference system. To this end, we propose a reliable and fast feature-based multiple-view alignment pipeline that allows interactive registration of multiple views according to an unchained acquisition procedure. A robust alignment of each new view is estimated with respect to the previously aligned data through fast extraction, representation and matching of feature points detected in overlapping areas from different views. The proposed pipeline guarantees a highly reliable alignment of dense range image datasets on a variety of objects within a few seconds per million points.
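Once feature points have been matched across two views, the pairwise alignment reduces to a closed-form rigid-transform estimate. The sketch below uses the classic Kabsch/Procrustes SVD solution as a stand-in for that step; the paper's own estimator is not specified, and the function name is illustrative.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of matched feature points from two views.
    Kabsch/Procrustes solution via SVD - the kind of closed-form step
    a feature-based pairwise alignment stage relies on.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    h = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t
```

Because the solution is closed-form, it costs a single small SVD per view pair, which is consistent with interactive, seconds-per-million-points throughput.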

    A Double-Deep Spatio-Angular Learning Framework for Light Field based Face Recognition

    Face recognition has attracted increasing attention due to its wide range of applications, but it is still challenging when facing large variations in the biometric data characteristics. Lenslet light field cameras have recently come into prominence to capture rich spatio-angular information, thus offering new possibilities for advanced biometric recognition systems. This paper proposes a double-deep spatio-angular learning framework for light field based face recognition, which is able to learn both texture and angular dynamics in sequence using convolutional representations; this is a novel recognition framework that has never been proposed before for either face recognition or any other visual recognition task. The proposed double-deep learning framework includes a long short-term memory (LSTM) recurrent network whose inputs are VGG-Face descriptions that are computed using a VGG-Very-Deep-16 convolutional neural network (CNN). The VGG-16 network uses different face viewpoints rendered from a full light field image, which are organised as a pseudo-video sequence. A comprehensive set of experiments has been conducted with the IST-EURECOM light field face database, for varied and challenging recognition tasks. Results show that the proposed framework achieves superior face recognition performance when compared to the state-of-the-art.
    Comment: Submitted to IEEE Transactions on Circuits and Systems for Video Technology
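The angular-dynamics stage above amounts to running a recurrent cell over one CNN descriptor per rendered viewpoint, treated as a pseudo-video sequence. The toy numpy LSTM below shows only that aggregation pattern; it is not the paper's VGG-Face/LSTM implementation, and the parameter layout is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_aggregate(views, w, u, b):
    """Run a single-layer LSTM over per-view CNN descriptors.

    views: (T, D) array - one descriptor per rendered viewpoint,
    ordered as a pseudo-video sequence. w: (4H, D), u: (4H, H) and
    b: (4H,) stack the input/forget/cell/output gate parameters.
    Returns the final hidden state: a fixed-size embedding of the
    whole angular sequence, usable for classification.
    """
    hdim = u.shape[1]
    h = np.zeros(hdim)
    c = np.zeros(hdim)
    for x in views:
        z = w @ x + u @ h + b                # all four gates at once
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)           # cell state update
        h = o * np.tanh(c)                   # hidden state (output)
    return h
```

Ordering the viewpoints consistently matters: the recurrent state accumulates how appearance changes across the angular dimension, which is the "angular dynamics" the framework is designed to learn.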