261 research outputs found

    Individual maize location and height estimation in field from UAV-borne LiDAR and RGB images

    Crop height is an essential parameter used to monitor overall crop growth, forecast crop yield, and estimate crop biomass in precision agriculture. Individual maize segmentation is a prerequisite for precision field monitoring, but it is challenging because maize stalks are usually occluded by the leaves of adjacent plants, especially as the plants mature. In this study, we proposed a novel method that combined seedling detection and clustering algorithms to segment individual maize plants from UAV-borne LiDAR and RGB images. As seedlings emerged, images collected by an RGB camera mounted on a UAV platform were processed into a digital orthophoto map. Based on this orthophoto, the location of each maize seedling was identified by extra-green detection and morphological filtering. A seed point set was then generated and used as input to the clustering algorithm; a fuzzy C-means clustering algorithm was used to segment the individual maize plants. For individual plant height estimation, we computed the difference between the maximum elevation of the LiDAR point cloud and the average elevation of the bare digital terrain model (DTM) over each corresponding area. The results revealed that our height estimation approach, tested on two cultivars, achieved R² greater than 0.95, with root mean square errors (RMSE) of 4.55 cm, 3.04 cm, and 3.29 cm and mean absolute percentage errors (MAPE) of 3.75%, 0.91%, and 0.98% at three different growth stages, respectively. Our approach, utilizing UAV-borne LiDAR and RGB cameras, demonstrated promising performance for estimating maize height and field position.
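The per-plant height rule described in this abstract (maximum LiDAR elevation minus the mean bare-ground DTM elevation in the corresponding area) can be sketched as follows. This is a minimal illustration; the function name and array layout are ours, not from the paper.

```python
import numpy as np

def estimate_plant_height(plant_points, dtm_elevations):
    """Estimate plant height as the difference between the highest
    LiDAR return of the segmented plant and the average bare-ground
    elevation of the DTM over the corresponding area."""
    max_z = float(plant_points[:, 2].max())    # top of the canopy
    ground_z = float(np.mean(dtm_elevations))  # average bare-ground level
    return max_z - ground_z

# Toy example: highest return at 2.10 m above datum over ground
# averaging 0.30 m gives a plant height of 1.80 m.
plant = np.array([[0.0, 0.0, 0.5], [0.1, 0.0, 1.7], [0.0, 0.1, 2.1]])
ground = np.array([0.28, 0.30, 0.32])
height = estimate_plant_height(plant, ground)
```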

    Crop plant reconstruction and feature extraction based on 3-D vision

    3-D imaging is increasingly affordable and offers new possibilities for more efficient agricultural practice through the use of highly advanced technological devices. Reasons contributing to this include the continuous increase in computer processing power, the decrease in the cost and size of electronics, the increase in solid-state illumination efficiency, and the need for greater knowledge and care of individual crops. The implementation of 3-D imaging systems in agriculture has been impeded by the difficulty of economically justifying expensive devices for producing relatively low-cost seasonal products. However, this may no longer be true, since low-cost 3-D sensors with advanced technical capabilities, such as the one used in this work, are already available. The aim of this cumulative dissertation was to develop new methodologies to reconstruct the 3-D shape of agricultural environments in order to recognize and quantitatively describe structures, in this case maize plants, for agricultural applications such as plant breeding and precision farming. To fulfil this aim, a comprehensive review of 3-D imaging systems in agricultural applications was carried out in order to select a sensor that was affordable and had not been fully investigated in agricultural environments. A low-cost TOF sensor was selected to obtain 3-D data of maize plants, and a new adaptive methodology was proposed for point cloud rigid registration and stitching. The resulting maize 3-D point clouds were highly dense and generated in a cost-effective manner. The validation of the methodology showed that the plants were reconstructed with high accuracy, and the qualitative analysis showed the visual variability of the plants depending on the 3-D perspective view. The generated point cloud was used to obtain information about plant parameters (stem position and plant height) in order to quantitatively describe the plant.
The resulting plant stem positions were estimated with an average mean error and standard deviation of 27 mm and 14 mm, respectively. Additionally, meaningful information about the plant height profile was also provided, with an average overall mean error of 8.7 mm. Since the maize plants considered in this research were highly heterogeneous in height, some of them had folded leaves, and they were planted with standard deviations that emulate the real performance of a seeder, the experimental maize setup can be considered a difficult scenario. Therefore, better performance for both plant stem position and height estimation could be expected for a maize field in better conditions. Finally, having a 3-D reconstruction of the maize plants using a cost-effective sensor, mounted on a small electric-motor-driven robotic platform, means that the cost (whether economic, energetic, or temporal) of generating every point in the point cloud is greatly reduced compared with previous research.
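The rigid registration step mentioned above can be illustrated with the classic Kabsch/SVD solution for aligning two point sets with known correspondences. This is a generic building block, not the dissertation's adaptive method, which handles correspondence search and stitching as well.

```python
import numpy as np

def rigid_align(source, target):
    """Estimate the rigid transform (R, t) mapping `source` onto
    `target`, assuming row-wise point correspondences (Kabsch/SVD).
    A generic sketch of rigid point-cloud registration; the actual
    adaptive registration method is more involved."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check: rotate a small cloud by 30 degrees about z and shift it,
# then recover the transform from the correspondences.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
src = np.random.default_rng(0).normal(size=(50, 3))
tgt = src @ R_true.T + t_true
R, t = rigid_align(src, tgt)
```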

    Remote Sensing of Biophysical Parameters

    Vegetation plays an essential role in the study of the environment through plant respiration and photosynthesis. Therefore, the assessment of the current vegetation status is critical to modeling terrestrial ecosystems and energy cycles. Canopy structure (LAI, fCover, plant height, biomass, leaf angle distribution) and biochemical parameters (leaf pigmentation and water content) have been employed to assess vegetation status and its dynamics at scales ranging from kilometric to decametric spatial resolutions, thanks to methods based on remote sensing (RS) data. Optical RS retrieval methods are based on the radiative transfer processes of sunlight in vegetation, which determine the amount of radiation measured by passive sensors in the visible and infrared channels. The increased availability of active RS (radar and LiDAR) data has fostered their use in many applications for the analysis of land surface properties and processes, thanks to their insensitivity to weather conditions and their ability to exploit rich structural and texture information. Optical and radar data fusion and multi-sensor integration approaches are pressing topics that could fully exploit the information conveyed by both the optical and microwave parts of the electromagnetic spectrum. This Special Issue reprint reviews the state of the art in biophysical parameter retrieval and its usage in a wide variety of applications (e.g., ecology, carbon cycle, agriculture, forestry, and food security).
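As a minimal illustration of the optical retrieval methods mentioned above, the normalized difference vegetation index (NDVI) contrasts red and near-infrared reflectance, which healthy vegetation absorbs and scatters very differently. This is a textbook example, not one of the specific retrieval methods reviewed in the reprint; the reflectance values below are purely illustrative.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, so dense canopies push NDVI toward 1."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative reflectances: dense vegetation vs. bare soil.
values = ndvi([0.45, 0.30], [0.05, 0.25])
```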

    Generation of 360 Degree Point Cloud for Characterization of Morphological and Chemical Properties of Maize and Sorghum

    Recently, image-based high-throughput phenotyping methods have gained popularity in plant phenotyping. Imaging projects 3D space onto a 2D grid, causing the loss of depth information and making the retrieval of plant morphological traits challenging. In this study, LiDAR was used along with a turntable to generate a 360-degree point cloud of single plants. A LabVIEW program was developed to control and synchronize both devices. A data processing pipeline was built to recover the digital surface models of the plants. The system was tested with maize and sorghum plants to derive morphological properties including leaf area, leaf angle, and leaf angular distribution. The results showed a high correlation between the manual measurements and the LiDAR measurements of leaf area (R² > 0.91). In addition, Structure from Motion (SfM) was used to generate 3D spectral point clouds of single plants at different narrow spectral bands, using 2D images acquired by moving the camera completely around the plants. Seven narrow-band (10 nm bandwidth) optical filters, with center wavelengths at 530 nm, 570 nm, 660 nm, 680 nm, 720 nm, 770 nm, and 970 nm, were used to obtain the images for generating a spectral point cloud. The possibility of deriving the biochemical properties of the plants (nitrogen, phosphorus, potassium, and moisture content) from the multispectral information in the 3D point cloud was tested through statistical modeling techniques. The results were promising and indicated the possibility of generating a 3D spectral point cloud for deriving both the morphological and biochemical properties of plants in the future. Advisor: Yufeng G
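The fusion of turntable scans into one 360-degree cloud can be sketched as rotating each single-viewpoint scan about the turntable's vertical axis by the angle at which it was captured. This is a simplified sketch: it assumes the rotation axis passes through the origin and omits axis calibration, which a real system like the one described would require.

```python
import numpy as np

def merge_turntable_scans(scans, angles_deg):
    """Fuse single-viewpoint scans into one 360-degree point cloud by
    rotating each scan about the turntable's vertical (z) axis by its
    capture angle. The sign convention depends on the turntable's
    direction of rotation; axis calibration is omitted for brevity."""
    merged = []
    for pts, ang in zip(scans, angles_deg):
        a = np.deg2rad(ang)
        Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
        merged.append(pts @ Rz.T)  # rotate scan into the common plant frame
    return np.vstack(merged)

# Two scans of the same physical point, taken at 0 and 90 degrees:
scan_a = np.array([[1.0, 0.0, 0.0]])
scan_b = np.array([[1.0, 0.0, 0.0]])
cloud = merge_turntable_scans([scan_a, scan_b], [0.0, 90.0])
```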

    The Burning Bush: Linking LiDAR-derived Shrub Architecture to Flammability

    Light detection and ranging (LiDAR) and terrestrial laser scanning (TLS) sensors are powerful tools for characterizing vegetation structure and for constructing three-dimensional (3D) models of trees, also known as quantitative structural models (QSM). 3D models and structural traits derived from them provide valuable information for biodiversity conservation, forest management, and fire behavior modeling. However, vegetation studies and 3D modeling methodologies often only focus on the forest canopy, with little attention given to understory vegetation. In particular, 3D structural information of shrubs is limited or not included in fire behavior models. Yet, understory vegetation is an important component of forested ecosystems, and has an essential role in determining fire behavior. In this dissertation, I explored the use of TLS data and quantitative structure models to model shrub architecture in three related studies. In the first study, I present a semi-automated methodology for reconstructing architecturally different shrubs from TLS LiDAR. By investigating shrubs with different architectures and point cloud densities, I showed that occlusion, shrub complexity, and shape greatly affect the accuracy of shrub models. In my second study, I assessed the 3D architectural drivers of understory flammability by evaluating the use of architectural metrics derived from the TLS point cloud and 3D reconstructions of the shrubs. I focused on eight species common in the understory of the fire-prone longleaf pine forest ecosystem of the state of Florida, USA. I found a general tendency for each species to be associated with a unique combination of flammability and architectural traits. Novel shrub architectural traits were found to be complementary to the direct use of TLS data and improved flammability predictions. 
The inherent complexity of shrub architecture and uncertainty in the TLS point cloud make scaling up from an individual shrub to the plot level a challenging task. Therefore, in my third study, I explored the effects of LiDAR uncertainty on vegetation parameter prediction accuracy. I developed a practical workflow to create synthetic forest stands with varying densities, which were subsequently scanned with simulated terrestrial LiDAR. This provided data sets quantitatively similar to those created by real-world LiDAR measurements, but with the advantage of exact knowledge of the forest plot parameters. The results showed that the LiDAR scan location had a large effect on prediction accuracy. Furthermore, occlusion is strongly related to sampling density and plot complexity. The results of this study illustrate the potential of non-destructive LiDAR approaches for quantifying shrub architectural traits. TLS, empirical quantitative structural models, and synthetic models provide valuable insights into shrub structure and fire behavior.
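Simple architectural traits of the kind discussed above can be computed directly from a point cloud. The sketch below derives two illustrative metrics (total height and crown footprint area); these are generic stand-ins for the richer QSM-derived traits used in the dissertation, and the toy "shrub" is fabricated for the example.

```python
import numpy as np
from scipy.spatial import ConvexHull

def shrub_metrics(points):
    """Compute two simple architectural traits from a shrub point cloud:
    total height (z extent) and crown footprint area (area of the 2-D
    convex hull of the x-y projection). For 2-D inputs, scipy's
    ConvexHull reports the enclosed area in its `volume` attribute."""
    height = float(points[:, 2].max() - points[:, 2].min())
    footprint = float(ConvexHull(points[:, :2]).volume)
    return height, footprint

# Toy shrub: a unit-square footprint with an apex 1.5 m above the base.
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.5, 0.5, 1.5]])
h, a = shrub_metrics(pts)
```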