13 research outputs found

    An Object-Based Strategy for Improving the Accuracy of Spatiotemporal Satellite Imagery Fusion for Vegetation-Mapping Applications

    No full text
    Spatiotemporal data fusion is a key technique for generating unified time-series images from various satellite platforms to support the mapping and monitoring of vegetation. However, the high similarity in the reflectance spectra of different vegetation types poses an enormous challenge for the similar-pixel selection procedure of spatiotemporal data fusion, which may lead to considerable uncertainties in the fusion results. Here, we propose an object-based spatiotemporal data-fusion framework that replaces the original similar-pixel selection procedure with an object-restricted method to address this issue. The proposed framework can be applied to any spatiotemporal data-fusion algorithm based on similar pixels. In this study, we modified the spatial and temporal adaptive reflectance fusion model (STARFM), the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), and the flexible spatiotemporal data-fusion model (FSDAF) using the proposed framework, and evaluated their performance in fusing Sentinel 2 and Landsat 8 images, Landsat 8 and Moderate-resolution Imaging Spectroradiometer (MODIS) images, and Sentinel 2 and MODIS images in a study site covered by grasslands, croplands, coniferous forests, and broadleaf forests. The results show that the proposed object-based framework improves all three data-fusion algorithms significantly by delineating vegetation boundaries more clearly; the improvement for FSDAF is the greatest among the three algorithms, with an average decrease of 2.8% in relative root-mean-square error (rRMSE) across all sensor combinations. Moreover, the improvement is more pronounced when fusing Sentinel 2 and Landsat 8 images (an average decrease of 2.5% in rRMSE). By using the fused images generated with the proposed object-based framework, we can improve the vegetation-mapping result by significantly reducing the salt-and-pepper effect. We believe that the proposed object-based framework has great potential for generating time-series high-resolution remote-sensing data for vegetation-mapping applications.
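
    As a point of reference for the accuracy metric quoted above, the short sketch below computes a relative root-mean-square error (rRMSE) between a fused band and a reference band; the percentage convention, array names, and synthetic data are illustrative assumptions rather than the paper's exact definition.

        import numpy as np

        def rrmse(fused, reference):
            """Relative RMSE (%) between a fused band and a reference band.

            Both inputs are reflectance arrays of equal shape; the RMSE is
            normalized by the mean reference reflectance (one common convention).
            """
            fused = np.asarray(fused, dtype=float)
            reference = np.asarray(reference, dtype=float)
            rmse = np.sqrt(np.mean((fused - reference) ** 2))
            return 100.0 * rmse / reference.mean()

        # Toy check with synthetic reflectance values
        ref = np.random.uniform(0.05, 0.4, size=(100, 100))
        fus = ref + np.random.normal(0.0, 0.01, size=ref.shape)
        print(f"rRMSE = {rrmse(fus, ref):.2f}%")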

    Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications

    No full text
    Accurate and repeated forest inventory data are critical to understand forest ecosystem processes and manage forest resources. In recent years, unmanned aerial vehicle (UAV)-borne light detection and ranging (lidar) systems have demonstrated effectiveness at deriving forest inventory attributes. However, their high cost has largely prevented them from being used in large-scale forest applications. Here, we developed a very low-cost UAV lidar system that integrates a recently emerged DJI Livox MID40 laser scanner (~$600 USD) and evaluated its capability in estimating both individual tree-level (i.e., tree height) and plot-level forest inventory attributes (i.e., canopy cover, gap fraction, and leaf area index (LAI)). Moreover, a comprehensive comparison was conducted between the developed DJI Livox system and four other UAV lidar systems equipped with high-end laser scanners (i.e., RIEGL VUX-1 UAV, RIEGL miniVUX-1 UAV, HESAI Pandar40, and Velodyne Puck LITE). Using these instruments, we surveyed a coniferous forest site and a broadleaved forest site, with tree densities ranging from 500 trees/ha to 3000 trees/ha, with 52 UAV flights at different flying height and speed combinations. The developed DJI Livox MID40 system effectively captured the upper canopy structure and terrain surface information at both forest sites. The estimated individual tree height was highly correlated with field measurements (coniferous site: R2 = 0.96, root mean squared error/RMSE = 0.59 m; broadleaved site: R2 = 0.70, RMSE = 1.63 m). The plot-level estimates of canopy cover, gap fraction, and LAI corresponded well with those derived from the high-end RIEGL VUX-1 UAV system but tended to have systematic biases in areas with medium to high canopy densities. Overall, the DJI Livox MID40 system performed comparably to the RIEGL miniVUX-1 UAV, HESAI Pandar40, and Velodyne Puck LITE systems in the coniferous site and to the Velodyne Puck LITE system in the broadleaved forest. Despite its apparent weaknesses of limited sensitivity to low-intensity returns and narrow field of view, we believe that the very low-cost system developed by this study can largely broaden the potential use of UAV lidar in forest inventory applications. This study also provides guidance for the selection of the appropriate UAV lidar system and flight specifications for forest research and management.
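
    For context on how plot-level attributes such as the ones evaluated here are often derived from a height-normalized point cloud, a minimal sketch follows; the 2 m canopy threshold, the extinction coefficient, and the synthetic return heights are assumptions for illustration and not this study's processing chain.

        import numpy as np

        def plot_metrics(return_heights, canopy_threshold=2.0, k=0.5):
            """Simple plot-level metrics from height-normalized lidar returns (m).

            Canopy cover is the fraction of returns above the threshold, gap
            fraction is its complement, and LAI uses a Beer-Lambert inversion
            with extinction coefficient k (illustrative simplification).
            """
            h = np.asarray(return_heights, dtype=float)
            cover = float(np.mean(h > canopy_threshold))
            gap_fraction = 1.0 - cover
            lai = -np.log(max(gap_fraction, 1e-6)) / k
            return cover, gap_fraction, lai

        heights = np.random.exponential(scale=6.0, size=50_000)  # synthetic returns
        cover, gap, lai = plot_metrics(heights)
        print(f"cover={cover:.2f}, gap fraction={gap:.2f}, LAI={lai:.2f}")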

    UAV-lidar aids automatic intelligent powerline inspection

    No full text
    In recent decades, a substantial increase in electricity demand has put pressure on powerline systems to ensure an uninterrupted power supply. In order to prevent power failures, timely and thorough powerline inspections are needed to detect possible anomalies in advance. In the past few years, the emerging unmanned aerial vehicle (UAV)-mounted sensors (e.g. light detection and ranging/lidar, optical cameras, infrared cameras, and ultraviolet cameras) have provided rich data sources for comprehensive and accurate powerline inspections. A challenge that still hinders the use of UAVs in powerline inspection is that their operation is highly dependent on the pilot's experience, which may pose risks to the safety of the powerline system and reduce inspection efficiency. An intelligent automatic inspection solution could overcome the limitations of current UAV-based inspection solutions. The main objective of this paper is to provide a contemporary look at the current state-of-the-art UAV-based inspections as well as to discuss a potential lidar-supported intelligent powerline inspection concept. Overall, standardized protocols for lidar-supported intelligent powerline inspections include four data analysis steps, i.e., point cloud classification, key point extraction, route generation, and fault detection. To demonstrate the feasibility of the proposed concept, we implemented a workflow using a dataset of 3536 powerline spans, showing that the inspection of a single powerline span could be completed in 10 min with only one or two technicians. This demonstrates that lidar-supported intelligent inspection can be used to inspect a powerline system with extremely high efficiency and low costs.
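
    The four analysis steps listed above can be pictured as a linear processing pipeline. The sketch below is only a structural outline with toy placeholder rules; none of the functions, thresholds, or labels come from the paper's implementation.

        import numpy as np

        def classify_points(pts):
            # Toy rule: label points as wire / pylon / ground by height alone
            z = pts[:, 2]
            return np.where(z > 20, "wire", np.where(z > 2, "pylon", "ground"))

        def extract_key_points(pts, labels):
            # Toy rule: take the two highest pylon points as span key points
            pylons = pts[labels == "pylon"]
            return pylons[np.argsort(pylons[:, 2])[-2:]]

        def generate_route(key_points, offset=5.0):
            # Fly a fixed vertical offset above each key point
            return [(x, y, z + offset) for x, y, z in key_points]

        def detect_faults(pts, labels, min_clearance=8.0):
            # Flag wires sagging below a minimum clearance above ground
            wires = pts[labels == "wire"]
            return ["low clearance"] if wires.size and wires[:, 2].min() < min_clearance else []

        def inspect_span(pts):
            """Classification -> key points -> route generation -> fault detection."""
            labels = classify_points(pts)
            route = generate_route(extract_key_points(pts, labels))
            return route, detect_faults(pts, labels)

        pts = np.random.uniform([0, 0, 0], [100, 10, 30], size=(1000, 3))  # synthetic span
        route, faults = inspect_span(pts)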

    The Development and Evaluation of a Backpack LiDAR System for Accurate and Efficient Forest Inventory

    No full text
    Forest inventory holds an essential role in forest management and research, but the existing field inventory methods are highly time-consuming and labor-intensive. Here, we developed a simultaneous localization and mapping-based backpack light detection and ranging (LiDAR) system with dual orthogonal laser scanners and an open-source Python package called Forest3D for efficient and accurate forest inventory applications. Two key forest inventory variables, tree height and diameter at breast height (DBH), were extracted at six study sites with different tree species compositions. In addition, the vertical point density distribution and leaf area density (LAD) were calculated for two complex natural forest sites. The results showed that the backpack LiDAR system together with the Forest3D package accurately estimated tree height (R2 = 0.65, RMSE = 1.90 m) and DBH (R2 = 0.95, RMSE = 0.02 m); these estimates were equivalent to those derived from terrestrial laser scanning (TLS), but obtained with much higher efficiency. The point density of the backpack LiDAR data was higher than or the same as that of the TLS data across all height strata, and the estimated LAD fit well with the TLS estimates (R2 > 0.92, RMSE = 0.01 m2/m3). The backpack LiDAR system, along with the Forest3D package, provides an efficient and accurate solution for extracting forest inventory variables, which should be of great interest to forest managers and researchers.
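
    To illustrate the kind of computation behind a DBH estimate from a stem point cloud, the sketch below fits an algebraic (Kasa) least-squares circle to points sliced around breast height; the slice bounds, variable names, and synthetic stem are assumptions and do not reproduce Forest3D's actual routines.

        import numpy as np

        def fit_circle(xy):
            """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, radius)."""
            x, y = xy[:, 0], xy[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x ** 2 + y ** 2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

        def estimate_dbh(stem_points, breast_height=1.3, slice_width=0.1):
            """DBH (m) from a single stem's points with heights normalized to ground."""
            z = stem_points[:, 2]
            stem_slice = stem_points[np.abs(z - breast_height) < slice_width / 2]
            _, _, radius = fit_circle(stem_slice[:, :2])
            return 2 * radius

        # Synthetic stem of 0.15 m radius -> DBH should come out near 0.30 m
        theta = np.random.uniform(0, 2 * np.pi, 500)
        stem = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta),
                                np.random.uniform(1.2, 1.4, 500)])
        print(f"DBH = {estimate_dbh(stem):.3f} m")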

    A marker-free method for registering multi-scan terrestrial laser scanning data in forest environments

    No full text
    Terrestrial laser scanning (TLS) has been recognized as an accurate means for non-destructively deriving three-dimensional (3D) forest structural attributes. These attributes include but are not limited to tree height, diameter at breast height, and leaf area density. As such, TLS has become an increasingly important technique in forest inventory practices and forest ecosystem studies. Multiple TLS scans collected at different locations are often required for a comprehensive characterization of the 3D canopy structure of a forest stand, for which multi-scan registration is a critical prerequisite. Currently, multi-scan TLS registration in forests is mainly based on a very time-consuming and tedious process of setting up hand-crafted registration targets in the field and manually identifying the common targets between scans from the collected data. In this study, a novel marker-free method that automatically registers multi-scan TLS data is presented. The main principle underlying our method is to identify shaded areas from the raw point cloud of a single TLS scan and to use them as the key features to register multi-scan TLS data. The proposed method was tested with 17 pairs of TLS scans collected in six plots across China with various vegetation characteristics (e.g., vegetation type, height, and understory complexity). Our results showed that the proposed method successfully registered all 17 pairs of TLS scans with accuracy equivalent to the manual registration approach. Moreover, the proposed method eliminates the need to set up registration targets in the field, manually identify registration targets from TLS data, or process raw TLS data to extract individual tree attributes, which makes it highly efficient and robust. It is anticipated that the proposed algorithms can save the time and cost of collecting TLS data in forests and therefore improve the efficiency of TLS forestry applications.
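
    The shaded-area features used by the authors are not reproduced here, but once corresponding points between two scans have been identified, the rigid transform aligning them is typically solved with an SVD-based (Kabsch) step like the generic sketch below; this is a standard building block rather than the paper's algorithm.

        import numpy as np

        def rigid_align(source, target):
            """Least-squares rotation R and translation t so that R @ p + t ~= q.

            source, target: (N, 3) arrays of corresponding points from two scans.
            """
            src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
            H = (source - src_c).T @ (target - tgt_c)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = tgt_c - R @ src_c
            return R, t

        # Recover a known rotation/translation from synthetic correspondences
        rng = np.random.default_rng(1)
        src = rng.uniform(-10, 10, size=(200, 3))
        angle = np.radians(30)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                           [np.sin(angle),  np.cos(angle), 0],
                           [0, 0, 1]])
        tgt = src @ R_true.T + np.array([5.0, -2.0, 0.5])
        R_est, t_est = rigid_align(src, tgt)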

    Separating the Structural Components of Maize for Field Phenotyping Using Terrestrial LiDAR Data and Deep Convolutional Neural Networks

    No full text
    Separating structural components is important but also challenging for plant phenotyping and precision agriculture. Light detection and ranging (LiDAR) technology can potentially overcome these difficulties by providing high quality data. However, there are difficulties in automatically classifying and segmenting components of interest. Deep learning can extract complex features, but it is mostly used with images. Here, we propose a voxel-based convolutional neural network (VCNN) for maize stem and leaf classification and segmentation. Maize plants at three different growth stages were scanned with a terrestrial LiDAR and the voxelized LiDAR data were used as inputs. A total of 3000 individual plants (22,004 leaves and 3000 stems) were prepared for training through data augmentation, and 103 maize plants were used to evaluate the accuracy of classification and segmentation at both instance and point levels. The VCNN was compared with traditional clustering methods (K-means and density-based spatial clustering of applications with noise), a geometry-based segmentation method, and state-of-the-art deep learning methods (PointNet and PointNet++). The results showed that: 1) at the instance level, the mean accuracies of classification and segmentation (F-score) were 1.00 and 0.96, respectively; 2) at the point level, the mean accuracies of classification and segmentation (F-score) were 0.91 and 0.89, respectively; 3) the VCNN method outperformed traditional clustering methods; and 4) the VCNN was on par with PointNet and PointNet++ in classification, and performed the best in segmentation. The proposed method demonstrated LiDAR's ability to separate structural components for crop phenotyping using deep learning, which can be useful for other fields.
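
    As a rough picture of the voxelized input named in this abstract, the sketch below builds a binary occupancy grid from a point cloud; the voxel size and grid dimensions are illustrative assumptions and not the VCNN's actual configuration.

        import numpy as np

        def voxelize(points, voxel_size=0.02, grid_shape=(64, 64, 96)):
            """Binary occupancy grid from an (N, 3) point cloud, e.g. for a 3-D CNN input."""
            idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
            idx = np.clip(idx, 0, np.array(grid_shape) - 1)  # clamp points outside the grid
            grid = np.zeros(grid_shape, dtype=np.float32)
            grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
            return grid

        plant = np.random.uniform(0.0, 1.0, size=(10_000, 3))  # synthetic 1 m plant extent
        occupancy = voxelize(plant)
        print(occupancy.shape, int(occupancy.sum()), "occupied voxels")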

    Non-destructive estimation of field maize biomass using terrestrial lidar: an evaluation from plot level to individual leaf level

    No full text
    Background: Precision agriculture is an emerging research field that relies on monitoring and managing field variability in phenotypic traits. An important phenotypic trait is biomass, a comprehensive indicator that can reflect crop yields. However, non-destructive biomass estimation at fine levels remains largely unexplored and challenging due to the lack of accurate and high-throughput phenotypic data and algorithms. Results: In this study, we evaluated the capability of terrestrial light detection and ranging (lidar) data in estimating field maize biomass at the plot, individual plant, leaf group, and individual organ (i.e., individual leaf or stem) levels. The terrestrial lidar data of 59 maize plots with more than 1000 maize plants were collected and used to calculate phenotypes through a deep learning-based pipeline, which were then used to predict maize biomass through simple regression (SR), stepwise multiple regression (SMR), artificial neural network (ANN), and random forest (RF). The results showed that terrestrial lidar data were useful for estimating maize biomass at all levels (at each level, R2 was greater than 0.80), and biomass estimation at the leaf group level was the most precise (R2 = 0.97, RMSE = 2.22 g) among all four levels. All four regression techniques performed similarly at all levels. However, considering the transferability and interpretability of the model itself, SR is the suggested method for estimating maize biomass from terrestrial lidar-derived phenotypes. Moreover, height-related variables proved to be the most important and robust variables for predicting maize biomass from terrestrial lidar at all levels, and some two-dimensional variables (e.g., leaf area) and three-dimensional variables (e.g., volume) showed great potential as well. Conclusion: We believe that this study is a unique effort to evaluate the capability of terrestrial lidar for estimating maize biomass at different levels, and it can provide a useful resource for the selection of the phenotypes and models required to estimate maize biomass in precision agriculture practices.
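
    To make the simple-regression (SR) option named above concrete, the sketch below fits biomass against a single lidar-derived height variable and reports R2 and RMSE; the synthetic data and the linear form are assumptions for illustration only.

        import numpy as np

        # Synthetic example: biomass (g) versus lidar-derived plant height (m)
        rng = np.random.default_rng(0)
        height = rng.uniform(0.5, 2.5, size=200)
        biomass = 40.0 * height + rng.normal(0.0, 5.0, size=200)

        # Simple (one-predictor) linear regression via least squares
        slope, intercept = np.polyfit(height, biomass, deg=1)
        predicted = slope * height + intercept

        ss_res = np.sum((biomass - predicted) ** 2)
        ss_tot = np.sum((biomass - biomass.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        rmse = np.sqrt(np.mean((biomass - predicted) ** 2))
        print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f} g")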