
    A Novel Methodology to Estimate Single-Tree Biophysical Parameters from 3D Digital Imagery Compared to Aerial Laser Scanner Data

    Airborne laser scanner (ALS) data provide an enhanced capability to remotely map two key variables in forestry: leaf area index (LAI) and tree height (H). Nevertheless, the cost, complexity and accessibility of this technology are not yet suited to meeting the broad demand for estimating and frequently updating forest data. Here we demonstrate the capability of alternative solutions based on low-cost color infrared (CIR) cameras to estimate tree-level parameters, providing a cost-effective solution for forest inventories. ALS data were acquired with a Leica ALS60 laser scanner, and digital aerial imagery (DAI) was acquired with a consumer-grade camera modified for color infrared detection and synchronized with a GPS unit. In this paper we evaluate the generation of a DAI-based canopy height model (CHM) from imagery obtained with low-cost CIR cameras using structure from motion (SfM) and spatial interpolation methods in the context of a complex canopy, as in forestry. Metrics were calculated from the DAI-based CHM and the DAI-based Normalized Difference Vegetation Index (NDVI) for the estimation of tree height and LAI, respectively. Results were compared with the models estimated from ALS point cloud metrics. Field measurements of tree height and effective leaf area index (LAIe) were acquired from a total of 200 and 26 trees, respectively. Comparable accuracies were obtained in the tree height and LAI estimations using ALS and DAI data independently. Tree height estimated from DAI-based metrics (Percentile 90 (P90) and minimum height (MinH)) yielded a coefficient of determination (R2) = 0.71 and a root mean square error (RMSE) = 0.71 m, while models derived from ALS-based metrics (P90) yielded an R2 = 0.80 and an RMSE = 0.55 m. The estimation of LAI from DAI-based NDVI using Percentile 99 (P99) yielded an R2 = 0.62 and an RMSE = 0.17 m2/m2.
A comparative analysis of LAI estimation using ALS-based metrics (laser penetration index (LPI), interquartile distance (IQ), and Percentile 30 (P30)) yielded an R2 = 0.75 and an RMSE = 0.14 m2/m2. The results provide insight into the appropriateness of using cost-effective 3D photo-reconstruction methods for targeting single trees with irregular and heterogeneous crowns in complex open-canopy forests. The study quantitatively demonstrates that low-cost CIR cameras can be used to estimate both single-tree height and LAI in forest inventories.
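The percentile-metric regression described above can be sketched in a few lines; the following is an illustrative example on synthetic data (the simulated crown pixels, variable names, and the ordinary-least-squares fit are assumptions for demonstration, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated field-measured heights for 20 trees (m)
field_h = rng.uniform(5.0, 20.0, size=20)

def crown_metrics(true_h, rng):
    """Derive P90 and MinH from simulated CHM pixels of one crown."""
    pixels = true_h * rng.uniform(0.5, 1.0, size=200)  # crown pixels below apex
    p90 = np.percentile(pixels, 90)    # Percentile 90 (P90)
    min_h = pixels.min()               # minimum height (MinH)
    return p90, min_h

# Design matrix: intercept + P90 + MinH per tree
X = np.array([crown_metrics(h, rng) for h in field_h])
X = np.column_stack([np.ones(len(X)), X])

# Ordinary least squares: field height ~ P90 + MinH
beta, *_ = np.linalg.lstsq(X, field_h, rcond=None)
pred = X @ beta

rmse = np.sqrt(np.mean((pred - field_h) ** 2))
r2 = 1 - np.sum((field_h - pred) ** 2) / np.sum((field_h - field_h.mean()) ** 2)
print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f} m")
```

The same fit-and-score pattern applies to the LAI models, with NDVI percentiles or ALS metrics (LPI, IQ, P30) in place of the height metrics.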

    Generation of 360 Degree Point Cloud for Characterization of Morphological and Chemical Properties of Maize and Sorghum

    Recently, image-based high-throughput phenotyping methods have gained popularity in plant phenotyping. Imaging projects 3D space onto a 2D grid, causing the loss of depth information and thus making the retrieval of plant morphological traits challenging. In this study, LiDAR was used along with a turntable to generate a 360-degree point cloud of single plants. A LabVIEW program was developed to control and synchronize both devices. A data processing pipeline was built to recover the digital surface models of the plants. The system was tested with maize and sorghum plants to derive morphological properties including leaf area, leaf angle and leaf angular distribution. The results showed a high correlation between the manual measurements and the LiDAR measurements of leaf area (R2 > 0.91). Also, structure from motion (SfM) was used to generate 3D spectral point clouds of single plants at different narrow spectral bands, using 2D images acquired by moving the camera completely around the plants. Seven narrow-band (bandwidth of 10 nm) optical filters, with center wavelengths at 530 nm, 570 nm, 660 nm, 680 nm, 720 nm, 770 nm and 970 nm, were used to obtain the images for generating a spectral point cloud. The possibility of deriving the biochemical properties of the plants (nitrogen, phosphorus, potassium and moisture content) from the multispectral information in the 3D point cloud was tested through statistical modeling techniques. The results were promising and thus indicated the possibility of generating a 3D spectral point cloud for deriving both the morphological and biochemical properties of plants in the future. Advisor: Yufeng G
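The core geometric step of the turntable approach can be sketched simply: each LiDAR profile is rotated about the vertical axis by the turntable angle at which it was captured, and the rotated profiles are stacked into one cloud. This is an illustrative sketch on synthetic profiles, not the thesis code:

```python
import numpy as np

def rotate_z(points, angle_deg):
    """Rotate an Nx3 point array about the z (vertical) axis."""
    a = np.radians(angle_deg)
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return points @ rz.T

def merge_scans(profiles, angle_step_deg):
    """Merge one profile per turntable position into a 360-degree cloud."""
    cloud = [rotate_z(p, i * angle_step_deg) for i, p in enumerate(profiles)]
    return np.vstack(cloud)

# Usage: 36 synthetic 50-point profiles captured at 10-degree steps
profiles = [np.column_stack([np.linspace(0.1, 0.5, 50),   # radial distance
                             np.zeros(50),                 # scan plane y = 0
                             np.linspace(0.0, 1.0, 50)])   # height
            for _ in range(36)]
cloud = merge_scans(profiles, 10.0)
print(cloud.shape)  # (1800, 3)
```

In practice the sensor-to-turntable-axis offset must also be calibrated and applied before rotation; that calibration is omitted here.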

    3D Remote Sensing Applications in Forest Ecology: Composition, Structure and Function

    Dear Colleagues, The composition, structure and function of forest ecosystems are the key features characterizing their ecological properties, and can thus be crucially shaped and changed by various biotic and abiotic factors on multiple spatial scales. The magnitude and extent of these changes in recent decades call for enhanced mitigation and adaptation measures. Remote sensing data and methods are the main complementary sources of up-to-date synoptic and objective information on forest ecology. Due to the inherent 3D nature of forest ecosystems, the analysis of 3D sources of remote sensing data is considered to be most appropriate for recreating a forest's compositional, structural and functional dynamics. In this Special Issue of Forests, we published a set of state-of-the-art scientific works including experimental studies, methodological developments and model validations, all dealing with the general topic of 3D remote sensing-assisted applications in forest ecology. The featured applications draw on a broad collection of method and sensor combinations, including fusion schemes. All in all, the studies and their focuses are as broad as a forest's ecology or the field of remote sensing and thus reflect the very diverse usages and directions toward which future research and practice will be directed.

    High-throughput phenotyping of plant leaf morphological, physiological, and biochemical traits on multiple scales using optical sensing

    Acquisition of plant phenotypic information facilitates plant breeding, sheds light on gene action, and can be applied to optimize the quality of agricultural and forestry products. Because leaves often show the fastest responses to external environmental stimuli, leaf phenotypic traits are indicators of plant growth, health, and stress levels. The combination of new imaging sensors, image processing, and data analytics permits measurement over the full life span of plants at high temporal resolution and at several organizational levels, from organs to individual plants to field populations of plants. We review the optical sensors and associated data analytics used for measuring morphological, physiological, and biochemical traits of plant leaves on multiple scales. We summarize the characteristics, advantages and limitations of optical sensing and data-processing methods applied in various plant phenotyping scenarios. Finally, we discuss the future prospects of plant leaf phenotyping research. This review aims to help researchers choose appropriate optical sensors and data-processing methods to acquire plant leaf phenotypes rapidly, accurately, and cost-effectively.

    Continuous 3D Reconstruction of Plants with Multispectral Information

    Phenotyping is the process of identifying desirable traits of plants. These traits depend not only on the plant genome but also on the environment. Imaging techniques can be applied in this field to help relieve the bottleneck created by manual data-gathering techniques. Climate change represents a challenge to satisfying the increasing demand for food, and phenotyping can help address this problem. In this work I developed a robust pipeline to build a 3D multispectral model of plants as a basis for phenotyping.

    A Multi-Sensor Phenotyping System: Applications on Wheat Height Estimation and Soybean Trait Early Prediction

    Phenotyping is an essential aspect of plant breeding research since it is the foundation of the plant selection process. Traditional plant phenotyping methods, such as measuring and recording plant traits manually, can be inefficient, laborious and prone to error. With the help of modern sensing technologies, high-throughput field phenotyping has recently become popular due to its ability to sense various crop traits non-destructively with high efficiency. A multi-sensor phenotyping system equipped with red-green-blue (RGB) cameras, radiometers, ultrasonic sensors, spectrometers, a global positioning system (GPS) receiver, a pyranometer, a temperature and relative humidity probe and a light detection and ranging (LiDAR) sensor was first constructed, and a LabVIEW program was developed for sensor control and data acquisition. Two studies were conducted, focusing on system performance examination and data exploration, respectively. The first study compared wheat height measurements from the ultrasonic sensor and the LiDAR. Canopy heights of 100 wheat plots were estimated five times over the season by the ground phenotyping system, and the results were compared to manual measurements. Overall, LiDAR provided the better estimates, with a root mean square error (RMSE) of 0.05 m and an R2 of 0.97. The ultrasonic sensor did not perform well in this mode of deployment. In conclusion, LiDAR was recommended as a reliable method for wheat height evaluation. The second study explored the possibility of early prediction of soybean traits from color and texture features of canopy images. A total of 6383 RGB images were captured at the V4/V5 growth stage over 5667 soybean plots growing at four locations. One hundred and forty color features and 315 gray-level co-occurrence matrix (GLCM)-based texture features were derived from each image. Another two variables were introduced to account for the location and timing differences between images.
Cubist and Random Forests were used for regression and classification modelling, respectively. Yield (RMSE = 9.82, R2 = 0.68), maturity (RMSE = 3.70, R2 = 0.76) and seed size (RMSE = 1.63, R2 = 0.53) were identified as potential soybean traits that might be early-predictable. Advisor: Yufeng G
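To make the GLCM texture features concrete, the following minimal sketch computes a symmetric, normalized co-occurrence matrix for one pixel offset and two classic Haralick statistics (contrast and homogeneity) on a toy quantized image. The study derived 315 such features over many offsets and statistics; this example, including the sample image, is purely illustrative:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for offset (dx, dy)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = image[y, x], image[y + dy, x + dx]
            m[i, j] += 1
            m[j, i] += 1   # count both directions for symmetry
    return m / m.sum()

def glcm_features(p):
    """Contrast and homogeneity of a normalized GLCM p."""
    idx = np.indices(p.shape)
    diff = idx[0] - idx[1]                      # gray-level difference per cell
    contrast = np.sum(p * diff ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(diff)))
    return contrast, homogeneity

# Toy 4-level quantized canopy image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
contrast, homogeneity = glcm_features(p)
print(f"contrast={contrast:.3f}, homogeneity={homogeneity:.3f}")
```

Feature vectors built this way (per offset and angle) can then feed a Random Forest or Cubist model, as described in the abstract.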

    Remote sensing technology applications in forestry and REDD+

    Advances in close-range and remote sensing technologies are driving innovations in forest resource assessment and monitoring on varying scales. Data acquired with airborne and spaceborne platforms provide high(er) spatial resolution, more frequent coverage, and more spectral information. Recent developments in ground-based sensors have advanced 3D measurements, low-cost permanent systems, and community-based monitoring of forests. The UNFCCC REDD+ mechanism has galvanized the remote sensing community and the development of forest geospatial products that countries can use for international reporting and national forest monitoring. However, an urgent need remains to better understand the options and limitations of remote and close-range sensing techniques in the field of forest degradation and forest change. Therefore, we invite scientists working on remote sensing technologies, close-range sensing, and field data to contribute to this Special Issue. Topics of interest include: (1) novel remote sensing applications that can meet the needs of forest resource information and REDD+ MRV, (2) case studies of applying remote sensing data for REDD+ MRV, (3) time-series algorithms and methodologies for forest resource assessment on different spatial scales, varying from the tree to the national level, and (4) novel close-range sensing applications that can support sustainable forestry and REDD+ MRV. We particularly welcome submissions on data fusion.

    Quantifying the urban forest environment using dense discrete return LiDAR and aerial color imagery for segmentation and object-level biomass assessment

    The urban forest is becoming increasingly important in the contexts of urban green space and recreation, carbon sequestration and emission offsets, and socio-economic impacts. In addition to aesthetic value, these green spaces remove airborne pollutants, preserve natural resources, and mitigate adverse climate changes, among other benefits. A great deal of attention recently has been paid to urban forest management. However, the comprehensive monitoring of urban vegetation for carbon sequestration and storage is an under-explored research area. Such an assessment of carbon stores often requires information at the individual tree level, necessitating the proper masking of vegetation from the built environment, as well as delineation of individual tree crowns. As an alternative to expensive and time-consuming manual surveys, remote sensing can be used effectively in characterizing the urban vegetation and man-made objects. Many studies in this field have made use of aerial and multispectral/hyperspectral imagery over cities. The emergence of light detection and ranging (LiDAR) technology, however, has provided new impetus to the effort of extracting objects and characterizing their 3D attributes - LiDAR has been used successfully to model buildings and urban trees. However, challenges remain when using such structural information only, and researchers have investigated the use of fusion-based approaches that combine LiDAR and aerial imagery to extract objects, thereby allowing the complementary characteristics of the two modalities to be utilized. In this study, a fusion-based classification method was implemented between high spatial resolution aerial color (RGB) imagery and co-registered LiDAR point clouds to classify urban vegetation and buildings from other urban classes/cover types. Structural, as well as spectral features, were used in the classification method. 
These features included height, flatness, and the distribution of normal surface vectors from LiDAR data, along with a non-calibrated LiDAR-based vegetation index, derived from combining LiDAR intensity at 1064 nm with the red channel of the RGB imagery. This novel index was dubbed the LiDAR-infused difference vegetation index (LDVI). Classification results indicated good separation between buildings and vegetation, with an overall accuracy of 92% and a kappa statistic of 0.85. A multi-tiered delineation algorithm subsequently was developed to extract individual tree crowns from the identified tree clusters, followed by the application of species-independent biomass models based on LiDAR-derived tree attributes in regression analysis. These LiDAR-based biomass assessments were conducted for individual trees, as well as for clusters of trees in cases where proper delineation of individual trees was impossible. The detection accuracy of the tree delineation algorithm was 70%. The LiDAR-derived biomass estimates were validated against allometry-based biomass estimates computed from field-measured tree data. It was found that LiDAR-derived tree volume, area, and different distribution parameters of height (e.g., maximum height, mean height) are important for modeling biomass. The best biomass models for the tree clusters and the individual trees showed adjusted R-squared values of 0.93 and 0.58, respectively. The results of this study showed that the developed fusion-based classification approach using LiDAR and aerial color (RGB) imagery is capable of producing good object detection accuracy. It was concluded that the LDVI can be used in vegetation detection and can act as a substitute for the normalized difference vegetation index (NDVI) when near-infrared multiband imagery is not available. Furthermore, the utility of LiDAR for characterizing the urban forest and associated biomass was proven.
This work could have a significant impact on the rapid and accurate assessment of urban green spaces and associated carbon monitoring and management.
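The abstract does not give the exact LDVI formula, so the sketch below assumes an NDVI-like normalized difference that substitutes 1064 nm LiDAR return intensity for the near-infrared band; treat the formula, function name, and sample values as illustrative assumptions:

```python
import numpy as np

def ldvi(lidar_intensity, red):
    """Assumed normalized-difference form of an LDVI-style index.

    Vegetation reflects strongly at 1064 nm and weakly in red,
    so vegetated pixels push the index toward +1. Pixels where
    both inputs are zero are mapped to 0 to avoid division by zero.
    """
    lidar_intensity = lidar_intensity.astype(float)
    red = red.astype(float)
    denom = lidar_intensity + red
    out = np.zeros_like(denom)
    np.divide(lidar_intensity - red, denom, out=out, where=denom > 0)
    return out

# Toy pixels: two vegetation-like (bright LiDAR return, dark red)
# and one built-surface-like (dark return, bright red)
intensity = np.array([200.0, 180.0, 40.0])
red = np.array([40.0, 60.0, 120.0])
print(ldvi(intensity, red))
```

As with NDVI, a simple threshold on the resulting index can serve as a first-pass vegetation mask before the fusion-based classification described above.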