Crowdsourcing Image Analysis for Plant Phenomics to Generate Ground Truth Data for Machine Learning
The accuracy of machine learning tasks critically depends on high-quality ground truth data. Producing good ground truth data typically involves trained professionals, which can be costly in time, effort, and money. Here we explore the use of crowdsourcing to generate large volumes of good-quality training data. We study an image analysis task involving the segmentation of corn tassels from images taken in a field setting. We investigate the accuracy, speed, and other quality metrics when this task is performed by students for academic credit, Amazon MTurk workers, and Master Amazon MTurk workers. We conclude that the Amazon MTurk and Master MTurk workers perform significantly better than the for-credit students, with no significant difference between the two MTurk worker types. Furthermore, the quality of the segmentation produced by Amazon MTurk workers rivals that of an expert worker. We provide best practices for assessing the quality of ground truth data and for comparing data quality produced by different sources. We conclude that properly managed crowdsourcing can be used to establish large volumes of viable ground truth data at low cost and high quality, especially in the context of high-throughput plant phenotyping. We also provide several metrics for assessing the quality of the generated datasets.
This article is published as Zhou, Naihui, Zachary D. Siegel, Scott Zarecor, Nigel Lee, Darwin A. Campbell, Carson M. Andorf, Dan Nettleton et al. "Crowdsourcing image analysis for plant phenomics to generate ground truth data for machine learning." PLoS Computational Biology 14, no. 7 (2018): e1006337. doi: 10.1371/journal.pcbi.1006337.
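The segmentation-quality comparison above rests on an overlap score between participant-drawn and gold-standard tassel boxes. As a hedged illustration of how such a score is typically computed (the function name and box format below are assumptions for this sketch, not the paper's code), precision is the fraction of the drawn box covered by the gold box, recall is the fraction of the gold box covered by the drawn box, and F is their harmonic mean:

```python
def box_f_score(gold, drawn):
    """F measure from the overlap of two axis-aligned boxes.

    Boxes are (x_min, y_min, x_max, y_max) tuples. Precision is the
    fraction of the drawn box that overlaps the gold box; recall is the
    fraction of the gold box that is covered; F is their harmonic mean.
    """
    def area(box):
        return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

    # Intersection rectangle of the two boxes (empty if they don't overlap)
    inter = (max(gold[0], drawn[0]), max(gold[1], drawn[1]),
             min(gold[2], drawn[2]), min(gold[3], drawn[3]))
    overlap = area(inter)
    if overlap == 0.0:
        return 0.0
    precision = overlap / area(drawn)
    recall = overlap / area(gold)
    return 2 * precision * recall / (precision + recall)
```

A perfectly placed box scores 1.0; a box shifted so that half of it misses the gold standard scores 0.5, since precision and recall both drop to one half.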
Assessing plant performance in the Enviratron
Background: Assessing the impact of the environment on plant performance requires growing plants under controlled environmental conditions. Plant phenotypes are a product of genotype × environment (G × E), and the Enviratron at Iowa State University is a facility for testing the effects of the environment on plant growth and development under controlled conditions. Crop plants (including maize) can be grown to maturity in the Enviratron, and the performance of plants under different environmental conditions can be monitored 24 h per day, 7 days per week throughout the growth cycle.
Results: The Enviratron is an array of custom-designed plant growth chambers that simulate different environmental conditions, coupled with precise sensor-based phenotypic measurements carried out by a robotic rover. The rover follows workflow instructions to periodically visit plants growing in the different chambers, where it measures various growth and physiological parameters. The rover consists of an unmanned ground vehicle, an industrial robotic arm, and an array of sensors including RGB, visible and near-infrared (VNIR) hyperspectral, thermal, and time-of-flight (ToF) cameras, a laser profilometer, and a pulse-amplitude-modulated (PAM) fluorometer. The sensors are autonomously positioned to detect leaves in the plant canopy, collecting various physiological measurements based on computer vision algorithms and planning motion via “eye-in-hand” movement control of the robotic arm. In particular, the automated leaf-probing function, which allows the precise placement of sensor probes on leaf surfaces, is a unique advantage of the Enviratron over other plant phenotyping systems.
Conclusions: The Enviratron offers a new level of control over plant growth parameters and optimizes the positioning and timing of sensor-based phenotypic measurements. Plant phenotypes in the Enviratron are measured in situ, in that the rover takes the sensors to the plants rather than moving the plants to the sensors.
This article is published as Bao, Yin, Scott Zarecor, Dylan Shah, Taylor Tuel, Darwin A. Campbell, Antony V. E. Chapman, David Imberti, Daniel Kiekhaefer, Henry Imberti, Thomas Lübberstedt, Yanhai Yin, Dan Nettleton, Carolyn J. Lawrence-Dill, Steven A. Whitham, Lie Tang, and Stephen H. Howell. "Assessing plant performance in the Enviratron." Plant Methods 15, no. 1 (2019): 117. doi: 10.1186/s13007-019-0504-y. Posted with permission.
Drawing boxes around tassels.
Left: sample participant-drawn boxes. Right: the red box is the gold-standard box and the black box is a participant-drawn box.
Parameter estimates from the ANOVA with the Master MTurk group as baseline.
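An ANOVA "with one group as baseline" corresponds to treatment (dummy) coding: the intercept estimate is the baseline group's mean, and each remaining coefficient is that group's mean minus the baseline mean. A minimal self-contained sketch of this coding on simulated scores (the group labels, means, and sample sizes below are made up for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated F scores for three participant groups (illustrative values only)
data = {
    "student": rng.normal(0.70, 0.05, 50),
    "mturk":   rng.normal(0.80, 0.05, 50),
    "master":  rng.normal(0.81, 0.05, 50),
}

# Treatment coding with "master" as the reference level: the intercept is
# the baseline group mean, and each other parameter is that group's mean
# difference from the baseline.
baseline = "master"
intercept = data[baseline].mean()
estimates = {"intercept": intercept}
for group, scores in data.items():
    if group != baseline:
        estimates[group] = scores.mean() - intercept
```

Under this coding, a negative coefficient for the student group directly expresses the paper's finding that for-credit students scored below the MTurk baseline.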
Both accuracy and time per question change as participants progress through the task.
A: time spent (log scale) as a function of image order. B: mean F value decreases very slightly over the course of the survey.
Best Linear Unbiased Predictors for images.
BLUPs are calculated in both analyses for F_mean and time in log scale. Color represents image difficulty as determined by an expert.
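For a balanced one-way random-effects model, the BLUP of an image's effect is its mean deviation from the grand mean, shrunk toward zero by the ratio of between-image variance to total variance of the image mean. A self-contained numpy sketch on simulated data (the sample sizes and variance components are assumptions for illustration; the paper's model is not reproduced here), with variance components estimated by the method of moments:

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_raters = 20, 8
true_effect = rng.normal(0.0, 0.08, n_images)   # simulated per-image shifts
scores = 0.75 + true_effect[:, None] + rng.normal(0.0, 0.03, (n_images, n_raters))

grand_mean = scores.mean()
image_means = scores.mean(axis=1)

# Method-of-moments variance components from the one-way ANOVA decomposition
mse = ((scores - image_means[:, None]) ** 2).sum() / (n_images * (n_raters - 1))
msb = n_raters * ((image_means - grand_mean) ** 2).sum() / (n_images - 1)
sigma2_b = max((msb - mse) / n_raters, 0.0)     # between-image variance

# BLUP: shrink each image's mean deviation toward zero; noisier image means
# (larger mse, fewer raters) are shrunk harder.
shrinkage = sigma2_b / (sigma2_b + mse / n_raters)
blups = shrinkage * (image_means - grand_mean)
```

The shrinkage factor lies in [0, 1), so every BLUP is smaller in magnitude than the raw deviation of that image's mean, which is what makes BLUPs more stable than per-image averages for ranking image difficulty.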
Example image used during training to demonstrate correct placement of bounding boxes around tassels.