
    Automated small-scale plant imaging system

    For research laboratories, commercial plant phenotyping systems are costly and often do not meet the requirements of the research project. As such, a small-scale plant imaging system was developed for a biology research group at Washington University in St. Louis. A previous iteration of the device had been prototyped; however, several design requirements were not met, or the design was not mechanically efficient. Therefore, this paper proposes a second design consisting of a new bridge, trolley and canopy that can be transported easily by the user and can be made with commercially available parts.

    An embedded system for the automated generation of labeled plant images to enable machine learning applications in agriculture

    A lack of sufficient training data, both in terms of variety and quantity, is often the bottleneck in the development of machine learning (ML) applications in any domain. For agricultural applications, ML-based models designed to perform tasks such as autonomous plant classification will typically be coupled to just one or perhaps a few plant species. As a consequence, each crop-specific task is very likely to require its own specialized training data, and the question of how to serve this need for data now often overshadows the more routine exercise of actually training such models. To tackle this problem, we have developed an embedded robotic system to automatically generate and label large datasets of plant images for ML applications in agriculture. The system can image plants from virtually any angle, thereby ensuring a wide variety of data; and with an imaging rate of up to one image per second, it can produce labeled datasets on the scale of thousands to tens of thousands of images per day. As such, this system offers an important alternative to time- and cost-intensive methods of manual generation and labeling. Furthermore, the use of a uniform background made of blue keying fabric enables additional image processing techniques such as background replacement and plant segmentation. It also helps in the training process, essentially forcing the model to focus on the plant features and eliminating random correlations. To demonstrate the capabilities of our system, we generated a dataset of over 34,000 labeled images, with which we trained an ML model to distinguish grasses from non-grasses in test data from a variety of sources. We now plan to generate much larger datasets of Canadian crop plants and weeds that will be made publicly available in the hope of further enabling ML applications in the agriculture sector. Comment: 35 pages, 8 figures, preprint submitted to PLoS On
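    The blue-keying segmentation mentioned above can be sketched as a simple channel-dominance threshold: a pixel is treated as background when its blue channel clearly dominates red and green. This is an illustrative sketch only; the margin value and the masking rule are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def segment_plant(rgb, blue_margin=30):
    """Mask plant pixels against a blue keying-fabric background.

    rgb: H x W x 3 uint8 array. A pixel is classified as background when
    blue exceeds both red and green by `blue_margin` (an illustrative
    threshold). Returns a boolean mask that is True for plant pixels.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    background = (b > r + blue_margin) & (b > g + blue_margin)
    return ~background
```

    A mask like this is what makes background replacement straightforward: background pixels can simply be overwritten with any scene before the image is added to the training set.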

    Design and Analysis of Booms for Wheeled Mobile Platform for Crop Phenotyping

    Crop phenotyping is frequently used by breeders and crop scientists to monitor the growth of plants and to relate plant traits to genotypes. This contributes to better crop growth and higher yields, helping to address the food insecurity caused by a growing world population. Compared with traditional crop monitoring, which is labor intensive, high-throughput phenotyping (HTP) using ground-based vehicles has several advantages over manual methods. Equipped with advanced sensors, high-throughput phenotyping platforms quickly, accurately, and automatically measure and record plant traits such as appearance, height, and temperature. Although there have been many studies on plant phenotyping, there is still a need for ground-based HTP platforms that perform accurate phenotyping on targeted crops (e.g., canola and wheat). Previous studies using ground-based HTP platforms focus primarily on leafy plants rather than densely cultivated crops. In addition, previous platforms are designed for specific vehicles or sensors and are inappropriate for canola or wheat, the targeted crops of this study. The main objective of this research is to develop appropriate mechanical structures that can be attached to different wheeled mobile platforms for HTP study. Using sensors attached to these mechanical booms, data are collected automatically for several traits such as height, temperature, and greenness, along with photos. These collected data are compared with manually measured data to evaluate the performance of the system, including the suitability of the mechanical structure. Three generations of the HTP platform are developed. The 1st and 2nd generation booms, with simple structures, use C-channel as the key component. While developing these booms, the stress, deformation, and vibration are assessed with finite element analysis (FEA).
Meanwhile, it is necessary to understand the actual vibration pattern of these relatively long cantilever beams when attached to moving vehicles; however, previous research has given little attention to the influence of vibration on long booms in a farm setting. Thus, part of this research investigates how different factors, such as vehicle selection, vehicle speed, sensor locations, and road conditions, influence the boom attached to a farm machine, its vibration, and the effects on sensor performance for phenotyping. Ideal operating conditions for HTP were then obtained. The measurements from the sensors confirm that the proposed mechanical structures, under their ideal operating conditions, fulfill the requirements for accurate sensor measurements. Finally, the 3rd generation boom/robotic arm, featuring a hybrid structure, is proposed and analyzed for its kinematic and dynamic suitability. Calculation and simulation show that this robotic arm meets the requirements, including long reach and high payload capability, while maintaining a lightweight and relatively compact size after folding. Moreover, comparing path planning results between the Newton-Euler iterative method and simulations shows that they correlate well.
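    The boom vibration question above rests on classical cantilever-beam theory. As a minimal sketch (the material and cross-section values in the test are generic placeholders, not the thesis's actual booms), the first bending natural frequency of a uniform Euler-Bernoulli cantilever follows from its length, stiffness, and mass per unit length:

```python
import math

def cantilever_first_frequency(E, I, rho, A, L):
    """First bending natural frequency (Hz) of a uniform cantilever beam.

    E: Young's modulus (Pa), I: area moment of inertia (m^4),
    rho: density (kg/m^3), A: cross-section area (m^2), L: length (m).
    Classical Euler-Bernoulli result with the first mode root
    lambda_1 ~= 1.8751 of cos(x) * cosh(x) = -1.
    """
    lam = 1.8751
    return (lam ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))
```

    The inverse-square dependence on length is why long sensor booms are vibration-prone: doubling the boom length cuts the first natural frequency by a factor of four, moving it closer to the excitation frequencies of a moving farm vehicle.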

    Wheat Height Estimation Using LiDAR in Comparison to Ultrasonic Sensor and UAS

    As one of the key crop traits, plant height is traditionally evaluated manually, which can be slow, laborious and prone to error. The rapid development of remote and proximal sensing technologies in recent years allows plant height to be estimated in a more objective and efficient fashion, while research directly comparing different height measurement methods seems to be lagging. In this study, a ground-based multi-sensor phenotyping system equipped with ultrasonic sensors and light detection and ranging (LiDAR) was developed. Canopy heights of 100 wheat plots were estimated five times during a season by the ground phenotyping system and an unmanned aircraft system (UAS), and the results were compared to manual measurements. Overall, LiDAR provided the best results, with a root-mean-square error (RMSE) of 0.05 m and an R2 of 0.97. UAS obtained reasonable results with an RMSE of 0.09 m and an R2 of 0.91. Ultrasonic sensors did not perform well due to our static measurement style. In conclusion, we suggest that LiDAR and UAS are reliable alternative methods for wheat height evaluation.
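    The reported accuracy metrics can be reproduced from paired sensor and manual height measurements with the standard definitions of RMSE and the coefficient of determination; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and reference values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

    With `y_true` as the manual plot heights and `y_pred` as the LiDAR or UAS estimates, these two numbers summarize how each sensor compares against the manual baseline.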

    A high-throughput, field-based phenotyping technology for tall biomass crops

    Recent advances in omics technologies have not been accompanied by equally efficient, cost-effective and accurate phenotyping methods required to dissect the genetic architecture of complex traits. Even though high-throughput phenotyping platforms have been developed for controlled environments, field-based aerial and ground technologies have only been designed and deployed for short stature crops. Therefore, we developed and tested Phenobot 1.0, an auto-steered and self-propelled field-based high-throughput phenotyping platform for tall dense canopy crops, such as sorghum (Sorghum bicolor L. Moench). Phenobot 1.0 was equipped with laterally positioned and vertically stacked stereo RGB cameras. Images collected from 307 diverse sorghum lines were reconstructed in 3D for feature extraction. User interfaces were developed and multiple algorithms were evaluated for their accuracy in estimating plant height and stem diameter. Tested feature extraction methods included: i) User-interactive Individual Plant Height Extraction based on dense stereo 3D reconstruction (UsIn-PHe); ii) Automatic Hedge-based Plant Height Extraction (Auto-PHe) based on dense stereo 3D reconstruction; iii) User-interactive Dense Stereo Matching Stem Diameter Extraction (DenS-Di); and iv) User-interactive Image Patch Stereo Matching Stem Diameter Extraction (IPaS-Di). Comparative genome-wide association analysis and ground-truth validation demonstrated that both UsIn-PHe and Auto-PHe were accurate methods to estimate plant height, while Auto-PHe had the additional advantage of being a completely automated process. For stem diameter, IPaS-Di generated the most accurate estimates of this biomass-related architectural trait. In summary, our technology proved robust for obtaining ground-based high-throughput plant architecture parameters of sorghum, a tall and densely planted crop species.
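    A common way to extract plant height from a reconstructed 3D point cloud is to take a high percentile of the point z-coordinates above the ground plane, which is robust to a few spurious reconstruction points. This is a generic sketch of that idea, not the exact UsIn-PHe or Auto-PHe procedures; the percentile value is an assumption.

```python
import numpy as np

def plant_height(points_z, ground_z=0.0, top_percentile=99.5):
    """Estimate plant height from the z-coordinates of a 3D point cloud.

    Takes a high percentile of z (rather than the maximum) so that a few
    outlier points from stereo-matching noise do not inflate the estimate,
    then subtracts the ground-plane elevation. Percentile is illustrative.
    """
    z = np.asarray(points_z, dtype=float)
    return float(np.percentile(z, top_percentile) - ground_z)
```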

    In Vivo Human-Like Robotic Phenotyping of Leaf and Stem Traits in Maize and Sorghum in Greenhouse

    In plant phenotyping, measurement of the morphological, physiological and chemical traits of leaves and stems is needed to investigate and monitor the condition of plants. Manual measurement of these properties is time consuming, tedious, error prone, and laborious. The use of robots is a new approach to such endeavors, enabling automatic monitoring with minimal human intervention. In this study, two plant phenotyping robotic systems were developed to automate the measurement of plant leaf properties and stem diameter, reducing the tediousness of data collection compared to manual measurements. The robotic systems comprised a four degree of freedom (DOF) robotic manipulator and a Time-of-Flight (TOF) camera. Robotic grippers were developed to integrate an optical fiber cable (coupled to a portable spectrometer) for leaf spectral reflectance measurement, a thermistor for leaf temperature measurement, and a linear potentiometer for stem diameter measurement. An image processing technique and a deep learning method were used to identify grasping points on leaves and stems, respectively. The systems were tested in a greenhouse using maize and sorghum plants. The results from the leaf phenotyping robot experiment showed that leaf temperature measurements by the phenotyping robot were correlated with those measured manually by a human researcher (R2 = 0.58 for maize and 0.63 for sorghum). The leaf spectral measurements by the phenotyping robot predicted leaf chlorophyll, water content and potassium with moderate success (R2 ranged from 0.52 to 0.61), whereas the predictions for leaf nitrogen and phosphorus were poor. The total execution time to grasp and take measurements from one leaf was 35.5±4.4 s for maize and 38.5±5.7 s for sorghum. Furthermore, the test showed that the grasping success rate was 78% for maize and 48% for sorghum.
The experimental results from the stem phenotyping robot demonstrated a high correlation between the manual and automated stem diameter measurements (R2 > 0.98). The execution time for stem diameter measurement was 45.3 s. The system successfully detected, localized, and grasped the stem for all plants during the experiment. Both robots could decrease the tediousness of collecting phenotypes compared to manual measurements. The phenotyping robots can be useful complements to traditional image-based high-throughput plant phenotyping in greenhouses by collecting in vivo morphological, physiological, and biochemical trait measurements for plant leaves and stems. Advisors: Yufeng Ge, Santosh Pitl
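    Reading stem diameter from a linear potentiometer in a gripper reduces to a linear calibration between two reference voltages corresponding to fully closed and fully open jaws. The sketch below shows only that generic mapping; all calibration constants are hypothetical, not values from this study.

```python
def stem_diameter_mm(voltage, v_min=0.5, v_max=4.5, d_max=60.0):
    """Convert a linear-potentiometer reading (V) to stem diameter (mm).

    Assumes jaw travel is linear between the calibration voltages
    v_min (jaws closed, 0 mm) and v_max (jaws open, d_max mm).
    All constants here are hypothetical placeholders. The reading is
    clamped to the valid mechanical range.
    """
    frac = (voltage - v_min) / (v_max - v_min)
    return d_max * min(max(frac, 0.0), 1.0)
```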

    A Multi-Sensor Phenotyping System: Applications on Wheat Height Estimation and Soybean Trait Early Prediction

    Phenotyping is an essential aspect of plant breeding research since it is the foundation of the plant selection process. Traditional plant phenotyping methods, such as measuring and recording plant traits manually, can be inefficient, laborious and prone to error. With the help of modern sensing technologies, high-throughput field phenotyping has become popular due to its ability to sense various crop traits non-destructively with high efficiency. A multi-sensor phenotyping system equipped with red-green-blue (RGB) cameras, radiometers, ultrasonic sensors, spectrometers, a global positioning system (GPS) receiver, a pyranometer, a temperature and relative humidity probe and a light detection and ranging (LiDAR) unit was first constructed, and a LabVIEW program was developed for sensor control and data acquisition. Two studies were conducted, focusing on system performance examination and data exploration, respectively. The first study compared wheat height measurements from the ultrasonic sensor and LiDAR. Canopy heights of 100 wheat plots were estimated five times over the season by the ground phenotyping system, and the results were compared to manual measurements. Overall, LiDAR provided the better estimates, with a root mean square error (RMSE) of 0.05 m and an R2 of 0.97. The ultrasonic sensor did not perform well due to the style of our application. In conclusion, LiDAR was recommended as a reliable method for wheat height evaluation. The second study explored the possibility of early prediction of soybean traits from color and texture features of canopy images. A total of 6,383 RGB images were captured at the V4/V5 growth stage over 5,667 soybean plots growing at four locations. One hundred and forty color features and 315 gray-level co-occurrence matrix (GLCM)-based texture features were derived from each image. Two additional variables were introduced to account for the location and timing differences between images.
Cubist and Random Forests were used for regression and classification modelling, respectively. Yield (RMSE = 9.82, R2 = 0.68), maturity (RMSE = 3.70, R2 = 0.76) and seed size (RMSE = 1.63, R2 = 0.53) were identified as potential soybean traits that might be early-predictable. Advisor: Yufeng G
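    The gray-level co-occurrence matrix underlying the texture features above counts how often pairs of gray levels co-occur at a fixed pixel offset. A minimal, unoptimized sketch (a single offset, with the image already quantized to a small number of gray levels; libraries such as scikit-image provide optimized versions):

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy).

    gray: 2-D integer array already quantized to `levels` gray levels.
    Each entry m[i, j] is the relative frequency with which gray level i
    co-occurs with gray level j at the given pixel offset. Texture
    features such as contrast or homogeneity are then derived from m.
    """
    g = np.asarray(gray)
    m = np.zeros((levels, levels), dtype=float)
    h, w = g.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[g[y, x], g[y + dy, x + dx]] += 1.0
    return m / m.sum()
```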
