
    Detection of Building Damages in High Resolution SAR Images based on SAR Simulation


    Extraction of Vegetation Biophysical Structure from Small-Footprint Full-Waveform Lidar Signals

    The National Ecological Observatory Network (NEON) is a continental-scale environmental monitoring initiative tasked with characterizing and understanding ecological phenomenology over a 30-year time frame. To support this mission, NEON collects ground-truth measurements, such as organism counts and characterization, carbon flux measurements, etc. To spatially upscale these plot-based measurements, NEON developed an airborne observation platform (AOP) with a high-resolution visible camera, a next-generation AVIRIS imaging spectrometer, and a discrete and waveform-digitizing light detection and ranging (lidar) system. While visible imaging, imaging spectroscopy, and discrete lidar are relatively mature technologies, our understanding of, and associated algorithm development for, small-footprint full-waveform lidar is still at an early stage. The primary aim of this work is to extend small-footprint full-waveform lidar capabilities to assess vegetation biophysical structure. To fully exploit waveform lidar capabilities, high-fidelity geometric and radiometric truth data are needed. Forests are structurally and spectrally complex, which makes collecting the necessary truth data challenging, if not impossible. We therefore use the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which provides an environment for radiometric simulations, to simulate waveform lidar signals. The first step of this research was to build a virtual forest stand based on Harvard Forest inventory data. This scene was used to assess the level of geometric fidelity necessary for small-footprint waveform lidar simulation in broadleaf forests. We found that leaves have the largest influence on the backscattered signal and that leaf stems and twigs contribute little. From this knowledge, a number of additional realistic and abstract virtual “forest” scenes were created to aid studies assessing the ability of waveform lidar systems to extract biophysical phenomenology. Based on these scenes, we developed an additive model for correcting the attenuation of the backscattered signal caused by the canopy. The attenuation-corrected waveform, when coupled with estimates of leaf-level reflectance, provides a measure of the complex within-canopy forest structure. This work improves our understanding of complex waveform lidar signals in forest environments and, importantly, takes the research community a significant step closer to assessing fine-scale, horizontally and vertically explicit leaf area, a holy grail of forest ecology.
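
    The attenuation-correction idea can be illustrated with a short sketch. The snippet below is not the abstract's additive model; it applies a simpler Beer-Lambert-style multiplicative correction, in which the energy already returned by upper canopy layers is used to estimate how much of the pulse was lost before reaching each range bin. The function name, the `leaf_reflectance` value, and the extinction scale `k` are all illustrative assumptions.

```python
import numpy as np

def correct_canopy_attenuation(waveform, leaf_reflectance=0.45, k=1.0):
    """Beer-Lambert-style attenuation correction (illustrative sketch only;
    the dissertation develops an additive model instead)."""
    waveform = np.asarray(waveform, dtype=float)
    total = waveform.sum() or 1.0
    # Energy returned before each bin, i.e. intercepted by higher canopy layers.
    intercepted = np.cumsum(waveform) - waveform
    # Estimated fraction of pulse energy still reaching each bin; the assumed
    # leaf reflectance converts returned energy back to intercepted energy.
    transmission = np.clip(1.0 - k * intercepted / (leaf_reflectance * total),
                           1e-3, 1.0)
    return waveform / transmission

# Toy two-layer canopy: strong crown return, then a weakened understory return.
wf = np.array([0.0, 5.0, 2.0, 1.0, 0.5, 3.0, 0.0])
print(correct_canopy_attenuation(wf))
```

    Bins deeper in the canopy are boosted more strongly, since less pulse energy reached them.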

    Opaque voxel-based tree models for virtual laser scanning in forestry applications

    Virtual laser scanning (VLS), the simulation of laser scanning in a computer environment, is a useful tool for field campaign planning, acquisition optimisation, and development and sensitivity analyses of algorithms in various disciplines, including forestry research. One key to meaningful VLS is a suitable 3D representation of the objects of interest. For VLS of forests, the way trees are constructed influences both the performance and the realism of the simulations. In this contribution, we analyse how well VLS can reproduce scans of individual trees in a forest. Specifically, we examine how different voxel sizes used to create a virtual forest affect point cloud metrics (e.g., height percentiles) and tree metrics (e.g., tree height and crown base height) derived from simulated point clouds. The level of detail of the voxelisation depends on the voxel size, which determines the number of voxel cells in the model. A smaller voxel size (i.e., more voxels) increases the computational cost of laser scanning simulations but allows for more detail in the object representation. We present a method that decouples voxel grid resolution from final voxel cube size by scaling voxels down to smaller cubes whose surface area is proportional to the estimated normalised local plant area density. Voxel models are created from terrestrial laser scanning point clouds and then virtually scanned in one airborne and one UAV-borne simulation scenario. Using a comprehensive dataset of spatially overlapping terrestrial, UAV-borne and airborne laser scanning field data, we compare metrics derived from simulated point clouds and from real reference point clouds. Compared to voxel cubes of fixed size with the same base grid size, using scaled voxels greatly improves the agreement of simulated and real point cloud metrics and tree metrics. This can be largely attributed to reduced artificial occlusion effects: the scaled voxels better represent gaps in the canopy, allowing for higher and more realistic crown penetration. Similarly high accuracy in the derived metrics can be achieved using regular fixed-size voxel models at a notably finer resolution, e.g., 0.02 m, but this can pose a computational limitation for simulations over large forest plots due to the roughly 50-fold higher number of filled voxels. We conclude that opaque scaled voxel models enable realistic laser scanning simulations in forests while avoiding the high computational cost of small fixed-size voxels.
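
    A minimal sketch of the voxel-scaling idea follows. It assumes plant area density (PAD) can be proxied by lidar returns per traversing pulse and normalised by its maximum; the paper estimates PAD from TLS point clouds, so the density proxy, the normalisation, and all names here are assumptions. Because the cube's cross-sectional area is made proportional to the normalised density, the edge length scales with the square root of that density.

```python
import numpy as np

def scaled_voxel_edges(point_counts, pulse_counts, grid_size=0.25):
    """Edge length of the opaque cube placed in each voxel cell, scaled so
    that its cross-sectional area tracks normalised local plant area density
    (illustrative sketch, not the paper's PAD estimator)."""
    # Crude PAD proxy: returns per pulse traversing the cell.
    density = np.where(pulse_counts > 0,
                       point_counts / np.maximum(pulse_counts, 1), 0.0)
    peak = density.max()
    norm = density / peak if peak > 0 else density
    # Area ∝ density  =>  edge ∝ sqrt(density).
    return grid_size * np.sqrt(norm)

counts = np.array([12, 3, 0, 30])
pulses = np.array([40, 40, 25, 40])
print(scaled_voxel_edges(counts, pulses))  # denser cells get larger opaque cubes
```

    Sparse cells thus shrink toward open space, which is what reduces the artificial occlusion and lets simulated pulses penetrate canopy gaps more realistically.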

    Semantic Modeling of Outdoor Scenes for the Creation of Virtual Environments and Simulations

    Efforts from both academia and industry have adopted photogrammetric techniques to generate visually compelling 3D models for the creation of virtual environments and simulations. However, the generated meshes do not contain semantic information for distinguishing between objects. To allow both user- and system-level interaction with the meshes, and to enhance the visual acuity of the scene, classifying the generated point clouds and associated meshes is a necessary step. This paper presents a point cloud/mesh classification and segmentation framework. The proposed framework provides a novel way of extracting object information, i.e., individual tree locations and related features, while accounting for the data quality issues present in a photogrammetrically generated point cloud. A case study using data collected at the University of Southern California was conducted to evaluate the proposed framework.
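
    As a toy stand-in for the tree-extraction step, the sketch below keeps points above a height threshold, clusters them in the horizontal plane, and reports one location per cluster. DBSCAN, the height cutoff, and all parameter values are illustrative choices, not the paper's method, and heights are assumed to be normalised above ground.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_tree_locations(points, min_height=2.0, eps=1.5, min_samples=20):
    """Cluster above-ground points in the x-y plane and return one centroid
    per cluster (illustrative stand-in for the framework's tree extraction).
    `points` is an (N, 3) array of x, y, z with z normalised above ground."""
    canopy = points[points[:, 2] > min_height]          # drop ground/low points
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(canopy[:, :2])
    centroids = [canopy[labels == k, :2].mean(axis=0)
                 for k in set(labels) if k != -1]       # label -1 = noise
    return np.array(centroids)
```

    A density-based clusterer is a natural fit here because photogrammetric point clouds are noisy and tree crowns vary in point count, so no fixed number of clusters needs to be specified.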

    A Signal Processing Approach for Preprocessing and 3D Analysis of Airborne Small-Footprint Full-Waveform Lidar Data

    The extraction of structural object metrics from a next-generation remote sensing modality, namely waveform light detection and ranging (LiDAR), has garnered increasing interest from the remote sensing research community. However, a number of challenges need to be addressed before structural or 3D vegetation modeling can be accomplished. These include proper processing of complex, often off-nadir waveform signals; extraction of relevant waveform parameters that relate to vegetation structure; and, from a quantitative modeling perspective, 3D rendering of a vegetation object from LiDAR waveforms. Three corresponding, broad research objectives were therefore addressed in this dissertation. First, the raw incoming LiDAR waveform is typically stretched, misaligned, and relatively distorted. A robust signal preprocessing chain for LiDAR waveform calibration, comprising noise reduction, deconvolution, waveform registration, and angular rectification, is presented. This preprocessing chain was validated using both simulated waveform data from high-fidelity 3D vegetation models, derived via the Digital Imaging and Remote Sensing Image Generation (DIRSIG) modeling environment, and real small-footprint waveform LiDAR data collected by the Carnegie Airborne Observatory (CAO) in a savanna region of South Africa. Results showed that the preprocessing approach significantly improved recovery of the temporal signal resolution and resulted in improved waveform-based vegetation biomass estimation. Second, a model for savanna vegetation biomass was derived from the processed waveform data by decoding the waveform into feature metrics for woody and herbaceous biomass estimation. The results confirmed that small-footprint waveform LiDAR data have significant potential for this application. Finally, a 3D image-clustering-based waveform LiDAR inversion model was developed for first-order (principal branch level) 3D tree reconstruction in both leaf-off and leaf-on conditions. These outputs not only contribute to the visualization of complex tree structures, but also benefit efforts related to the quantification of vegetation structure for natural resource applications from waveform LiDAR data.
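
    The first two steps of such a preprocessing chain can be sketched generically. The snippet below is not the dissertation's calibrated chain: it smooths the waveform and then deconvolves an assumed Gaussian system response via Richardson-Lucy iterations. In practice the system response would be measured from calibration returns, and every parameter here is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess_waveform(raw, psf_sigma=1.5, noise_sigma=0.8, iters=30):
    """Illustrative cleanup: denoise, then deconvolve an assumed Gaussian
    system response, sharpening the smeared return pulse."""
    # 1) Noise reduction: light Gaussian smoothing, clipped to nonnegative.
    signal = np.maximum(gaussian_filter1d(np.asarray(raw, float),
                                          sigma=noise_sigma), 0.0)
    # 2) Assumed Gaussian point-spread function (system response).
    t = np.arange(-6, 7)
    psf = np.exp(-0.5 * (t / psf_sigma) ** 2)
    psf /= psf.sum()
    # 3) Richardson-Lucy deconvolution: multiplicative updates that
    #    concentrate energy back into the underlying return peaks.
    est = np.full_like(signal, max(signal.mean(), 1e-12))
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        est *= np.convolve(signal / np.maximum(conv, 1e-12),
                           psf[::-1], mode="same")
    return est
```

    Recovering narrower peaks in this way is what makes the later steps, registration and feature-metric extraction, meaningful at fine temporal resolution.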

    Procedural Generation and Rendering of Realistic, Navigable Forest Environments: An Open-Source Tool

    Simulation of forest environments has applications ranging from entertainment and art creation to commercial and scientific modelling. Due to the unique features and lighting in forests, a forest-specific simulator is desirable; however, many current forest simulators are proprietary or highly tailored to a particular application. Here we review several areas of procedural generation and rendering specific to forest generation, and utilise this to create a generalised, open-source tool for generating and rendering interactive, realistic forest scenes. The system uses specialised L-systems to generate trees, which are distributed using an ecosystem simulation algorithm. The resulting scene is rendered using a deferred rendering pipeline, a Blinn-Phong lighting model with real-time leaf transparency, and post-processing lighting effects. The result is a system that achieves a balance between high natural realism and visual appeal, suitable for tasks including training computer vision algorithms for autonomous robots and visual media generation.
    Comment: 14 pages, 11 figures. Submitted to Computer Graphics Forum (CGF). The application and supporting configuration files can be found at https://github.com/callumnewlands/ForestGenerato
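
    The L-system core is easy to illustrate. Below is a minimal, parameter-free string-rewriting expansion with a classic bracketed branching rule; the tool's actual grammars are far more specialised, and the axiom and rule shown are textbook examples, not taken from the repository.

```python
def expand_lsystem(axiom, rules, depth):
    """Expand an L-system axiom by applying rewrite rules `depth` times.
    Symbols without a rule are copied through unchanged."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Classic bracketed branching rule:
# F = grow forward, +/- = turn, [ ] = push/pop turtle state.
rules = {"F": "F[+F]F[-F]F"}
print(expand_lsystem("F", rules, 2))
```

    A turtle-graphics interpreter then walks the expanded string to place branch segments, which is the step the tool's specialised grammars and rendering pipeline elaborate on.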