
    3D Scanning System for Automatic High-Resolution Plant Phenotyping

    Thin leaves, fine stems, self-occlusion, and non-rigid, slowly changing structures make plants difficult subjects for three-dimensional (3D) scanning and reconstruction -- two critical steps in automated visual phenotyping. Many current solutions, such as laser scanning, structured light, and multi-view stereo, can struggle to acquire usable 3D models because of limitations in scanning resolution and calibration accuracy. In response, we have developed a fast, low-cost 3D scanning platform that images plants on a rotating stage with two tilting DSLR cameras centred on the plant. It uses new methods of camera calibration and background removal to achieve high-accuracy 3D reconstruction. We assessed the system's accuracy using a 3D visual hull reconstruction algorithm applied to two plastic models of dicotyledonous plants, two sorghum plants and two wheat plants across different sets of tilt angles. Scan times ranged from 3 minutes (72 images using 2 tilt angles) to 30 minutes (360 images using 10 tilt angles). The leaf lengths, widths, areas and perimeters of the plastic models were measured manually and compared to measurements from the scanning system; the results agreed to within 3-4%. The 3D reconstructions obtained with the scanning system show excellent geometric agreement with all six plant specimens, even plants with thin leaves and fine stems. Comment: 8 pages, DICTA 201
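    A visual hull reconstruction like the one this abstract evaluates intersects the back-projected silhouettes of the plant from many calibrated views. The sketch below is a minimal, generic space-carving illustration of that idea, not the authors' implementation; the projection matrices, binary silhouette masks, working-volume bounds, and grid resolution are all assumed inputs from upstream calibration and background-removal steps.

```python
import numpy as np

def visual_hull(projection_matrices, silhouettes, bounds, resolution=128):
    """Carve a voxel grid using binary silhouettes from calibrated views.

    projection_matrices: list of 3x4 camera matrices P = K [R | t]
    silhouettes: list of 2D boolean arrays (True = plant pixel)
    bounds: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) of the working volume
    """
    (x0, x1), (y0, y1), (z0, z1) = bounds
    xs = np.linspace(x0, x1, resolution)
    ys = np.linspace(y0, y1, resolution)
    zs = np.linspace(z0, z1, resolution)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    # Homogeneous voxel centres, shape (4, N)
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(X.size)])

    occupied = np.ones(X.size, dtype=bool)
    for P, mask in zip(projection_matrices, silhouettes):
        uvw = P @ pts                      # project all voxel centres into this view
        u = np.round(uvw[0] / uvw[2]).astype(int)
        v = np.round(uvw[1] / uvw[2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep = np.zeros(X.size, dtype=bool)
        keep[inside] = mask[v[inside], u[inside]]
        occupied &= keep                   # a hull voxel must fall inside every silhouette
    return occupied.reshape(X.shape)
```

    The carved occupancy grid can then be meshed (e.g. with marching cubes) before leaf dimensions are measured; finer grids trade runtime for the resolution needed to preserve thin leaves and stems.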

    Approaches to three-dimensional reconstruction of plant shoot topology and geometry

    There are currently 805 million people classified as chronically undernourished, and yet the world's population is still increasing. At the same time, global warming is causing more frequent and severe flooding and drought, destroying crops and reducing the amount of land available for agriculture. Recent studies show that without crop climate adaptation, crop productivity will deteriorate. With access to 3D models of real plants it is possible to acquire detailed morphological and gross developmental data that can be used to study their ecophysiology, leading to increases in crop yield and stability across hostile and changing environments. Here we review approaches to the reconstruction of 3D models of plant shoots from image data, consider current applications in plant and crop science, and identify remaining challenges. We conclude that although phenotyping is receiving an increasing amount of attention – particularly from computer vision researchers – and numerous vision approaches have been proposed, it remains a highly interactive process. An automated system capable of producing 3D models of plants would significantly aid phenotyping practice, increasing the accuracy and repeatability of measurements.
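    To make concrete the kind of gross morphological data a reconstructed shoot model can yield, the short sketch below reads a hypothetical triangulated reconstruction with the trimesh library and reports bounding extents, surface area, and convex-hull volume. It is only an illustration of downstream trait extraction under the assumption that a watertight-enough mesh exists; it is not a method from the review.

```python
import trimesh

# Hypothetical file name; any triangulated shoot reconstruction would do.
mesh = trimesh.load("plant_shoot.ply", force="mesh")

extents = mesh.bounding_box.extents      # overall width, depth and height of the shoot
surface_area = mesh.area                 # total mesh surface area (same units as the model)
hull_volume = mesh.convex_hull.volume    # convex-hull volume as a coarse canopy-size proxy

print(f"height: {extents.max():.1f}  area: {surface_area:.1f}  hull volume: {hull_volume:.1f}")
```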

    A novel 3D imaging system for strawberry phenotyping

    Accurate and quantitative phenotypic data are vital in plant breeding programmes to assess the performance of genotypes and to make selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. A low-cost multi-view stereo (MVS) imaging system was developed that captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. This study demonstrates the feasibility of an MVS-based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost-effective phenotyping technique.
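    For the dimensional traits, a point-cloud pipeline like the one described typically segments the berry, aligns it to its principal axes, and reads off extents and volume. The sketch below shows one plausible NumPy/SciPy version of that step; it is not the study's custom software, and the mapping of the three principal axes to "height", "length" and "width" is an assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def berry_dimensions(points):
    """Estimate height/length/width and volume of one segmented berry.

    points: (N, 3) array from an MVS reconstruction, in the scan's units.
    The cloud is centred and rotated onto its principal axes so the extents
    are measured in a pose-independent frame.
    """
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    aligned = centred @ vt.T                       # coordinates along principal axes
    extents = aligned.max(axis=0) - aligned.min(axis=0)
    volume = ConvexHull(points).volume             # convex-hull volume of the berry
    return {"height": extents[0], "length": extents[1],
            "width": extents[2], "volume": volume}
```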

    A high-throughput, field-based phenotyping technology for tall biomass crops

    Recent advances in omics technologies have not been accompanied by equally efficient, cost-effective and accurate phenotyping methods required to dissect the genetic architecture of complex traits. Even though high-throughput phenotyping platforms have been developed for controlled environments, field-based aerial and ground technologies have only been designed and deployed for short-stature crops. Therefore, we developed and tested Phenobot 1.0, an auto-steered and self-propelled field-based high-throughput phenotyping platform for tall, dense-canopy crops such as sorghum (Sorghum bicolor L. Moench). Phenobot 1.0 was equipped with laterally positioned and vertically stacked stereo RGB cameras. Images collected from 307 diverse sorghum lines were reconstructed in 3D for feature extraction. User interfaces were developed and multiple algorithms were evaluated for their accuracy in estimating plant height and stem diameter. Tested feature extraction methods included: i) User-interactive Individual Plant Height Extraction based on dense stereo 3D reconstruction (UsIn-PHe); ii) Automatic Hedge-based Plant Height Extraction (Auto-PHe) based on dense stereo 3D reconstruction; iii) User-interactive Dense Stereo Matching Stem Diameter Extraction (DenS-Di); and iv) User-interactive Image Patch Stereo Matching Stem Diameter Extraction (IPaS-Di). Comparative genome-wide association analysis and ground-truth validation demonstrated that both UsIn-PHe and Auto-PHe were accurate methods to estimate plant height, while Auto-PHe had the additional advantage of being a completely automated process. For stem diameter, IPaS-Di generated the most accurate estimates of this biomass-related architectural trait. In summary, our technology proved robust for obtaining ground-based high-throughput plant architecture parameters of sorghum, a tall and densely planted crop species.
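    As context for the height-extraction methods named above, estimating plant height from a dense stereo reconstruction generally reduces to fitting the ground plane and measuring how far canopy points rise above it. The Open3D sketch below illustrates that generic idea; it is not the UsIn-PHe or Auto-PHe code, and the file path, RANSAC threshold and percentile are placeholder assumptions.

```python
import numpy as np
import open3d as o3d

def estimate_plant_height(cloud_path, percentile=99.5):
    """Estimate canopy height from a field point cloud.

    Fits the ground plane with RANSAC, then takes a high percentile of the
    point-to-plane distances so a few noisy points do not set the height.
    """
    pcd = o3d.io.read_point_cloud(cloud_path)
    plane, _ = pcd.segment_plane(distance_threshold=0.02,
                                 ransac_n=3,
                                 num_iterations=1000)
    a, b, c, d = plane
    pts = np.asarray(pcd.points)
    # Distance of every point to the fitted ground plane
    dist = np.abs(pts @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
    return np.percentile(dist, percentile)
```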

    3D machine vision system for robotic weeding and plant phenotyping

    The need for chemical-free food is increasing, and so is the demand for a larger supply to feed the growing global population. An autonomous weeding system should be capable of differentiating crop plants and weeds to avoid contaminating crops with herbicide or damaging them with mechanical tools. For the plant genetics industry, automated high-throughput phenotyping technology is critical to profiling seedlings at a large scale to facilitate genomic research. This research applied 2D and 3D imaging techniques to develop an innovative crop plant recognition system and a 3D holographic plant phenotyping system. A 3D time-of-flight (ToF) camera was used to develop a crop plant recognition system for broccoli and soybean plants. The developed system overcame previously unsolved problems caused by occluded canopies and illumination variation. Both 2D and 3D features were extracted and utilized for the plant recognition task. Broccoli and soybean recognition algorithms were developed based on the characteristics of the plants. In field experiments, detection rates of over 88.3% and 91.2% were achieved for broccoli and soybean plants, respectively. The detection algorithm also reached a speed of over 30 frames per second (fps), making it applicable for robotic weeding operations. Apart from applying 3D vision for plant recognition, a 3D reconstruction based phenotyping system was also developed for holographic 3D reconstruction and physical trait parameter estimation for corn plants. In this application, precise alignment of multiple 3D views is critical to the 3D reconstruction of a plant. Previously published research highlighted the need for high-throughput, high-accuracy, and low-cost 3D phenotyping systems capable of holographic plant reconstruction and plant morphology related trait characterization. This research contributed to the realization of such a system by integrating a low-cost 2D camera, a low-cost 3D ToF camera, and a chessboard-pattern beacon array to track the 3D camera's position and attitude, thus accomplishing precise 3D point cloud registration from multiple views. Specifically, algorithms of beacon target detection, camera pose tracking, and spatial relationship calibration between 2D and 3D cameras were developed. The phenotypic data obtained by this novel 3D reconstruction based phenotyping system were validated against instrument and manual measurements, showing that the system achieved measurement accuracy of more than 90% in most cases, with an average processing time of less than five seconds per plant.
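    The registration step described here amounts to placing each camera view into a common world frame using the externally tracked camera pose and then merging the clouds, optionally with ICP refinement. The Open3D sketch below is a generic illustration of that idea under the assumption that per-view point clouds and 4x4 camera-to-world poses (e.g. from a fiducial/beacon tracker) are already available; it does not reproduce the beacon-detection or 2D/3D calibration algorithms, and the correspondence distance and voxel size are placeholders.

```python
import open3d as o3d

def merge_views(clouds, camera_poses, refine=True):
    """Fuse per-view ToF point clouds into a single model.

    clouds: list of open3d.geometry.PointCloud, one per viewpoint
    camera_poses: list of 4x4 camera-to-world matrices from an external tracker
    """
    merged = o3d.geometry.PointCloud()
    for cloud, pose in zip(clouds, camera_poses):
        view = cloud.transform(pose)          # coarse alignment from the tracked pose
                                              # (transform modifies the cloud in place)
        if refine and len(merged.points) > 0:
            # Optional fine alignment of this view against the growing model
            reg = o3d.pipelines.registration.registration_icp(
                view, merged, max_correspondence_distance=0.01,
                estimation_method=o3d.pipelines.registration
                    .TransformationEstimationPointToPoint())
            view = view.transform(reg.transformation)
        merged += view
    return merged.voxel_down_sample(voxel_size=0.002)
```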