
    Automated Stem Angle Determination for Temporal Plant Phenotyping Analysis

    Image-based plant phenotyping analysis refers to the monitoring and quantification of phenotypic traits by analyzing images of the plants captured by different types of cameras at regular intervals in a controlled environment. Extracting meaningful phenotypes for temporal phenotyping analysis by considering individual parts of a plant, e.g., leaves and stem, using computer vision-based techniques remains a critical bottleneck due to constantly increasing complexity in plant architecture with variations in self-occlusions and phyllotaxy. The paper introduces an algorithm to compute the stem angle, a potential measure for plants’ susceptibility to lodging, i.e., the bending of the plant’s stem. Annual yield losses due to stem lodging in the U.S. range between 5 and 25%. In addition to outright yield losses, grain quality may also decline as a result of stem lodging. The algorithm to compute the stem angle involves the identification of leaf-tips and leaf-junctions based on a graph-theoretic approach. The efficacy of the proposed method is demonstrated based on experimental analysis on a publicly available dataset called Panicoid Phenomap-1. A time-series clustering analysis is also performed on the values of stem angles over a significant time interval during the vegetative stage of the maize plants’ life cycle. This analysis effectively summarizes the temporal patterns of the stem angles into three main groups, which provides further insight into genotype-specific behavior of the plants. A comparison of genotypic purity using time-series analysis establishes that the temporal variation of the stem angles is likely to be regulated by genetic variation under similar environmental conditions.
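
    The grouping of stem-angle time series into three temporal patterns could be sketched as a k-means clustering of whole angle trajectories. The sketch below uses synthetic angle curves (the actual Panicoid Phenomap-1 measurements, cluster count rationale, and any distance metric beyond plain Euclidean are not specified here and the three pattern shapes are purely illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stem-angle time series (degrees) for 12 plants over 20 imaging days.
# Three hypothetical temporal patterns stand in for real genotype behavior:
# stable near-vertical, gradual lodging, and oscillating.
days = np.arange(20)
series = np.vstack(
    [85 + rng.normal(0, 1, 20) for _ in range(4)] +
    [85 - 0.8 * days + rng.normal(0, 1, 20) for _ in range(4)] +
    [85 + 5 * np.sin(days / 3) + rng.normal(0, 1, 20) for _ in range(4)]
)

# Cluster each whole trajectory (one 20-D point per plant) into three groups,
# mirroring the paper's summary of temporal patterns into three main groups.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(series)
print(km.labels_)
```

Plants sharing a label exhibit similar angle dynamics over the interval, which is the kind of summary the abstract links to genotype-specific behavior.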

    Development of an Autonomous Indoor Phenotyping Robot

    In order to fully understand the interaction between phenotype and genotype x environment to improve crop performance, a large amount of phenotypic data is needed. Studying plants of a given strain under multiple environments can greatly help to reveal their interactions. To collect the labor-intensive data required to perform experiments in this area, an indoor rover has been developed, which can accurately and autonomously move between and inside growth chambers. The system uses Mecanum wheels, magnetic tape guidance, a Universal Robots UR 10 robot manipulator, and a Microsoft Kinect v2 3D sensor to position various sensors in this constrained environment. Integration of the motor controllers, robot arm, and a Microsoft Kinect (v2) 3D sensor was achieved in a customized C++ program. Detecting and segmenting plants in a multi-plant environment is a challenging task, which can be aided by integration of depth data into these algorithms. Image-processing functions were implemented to filter the depth image to minimize noise and remove undesired surfaces, reducing the memory requirement and allowing the plant to be reconstructed at a higher resolution in real time. Three-dimensional meshes representing plants inside the chamber were reconstructed using the Kinect SDK’s KinectFusion. After transforming user-selected points in camera coordinates to robot-arm coordinates, the robot arm is used in conjunction with the rover to probe desired leaves, simulating the future use of sensors such as a fluorimeter and Raman spectrometer. This paper shows the system architecture and some preliminary results of the system, as tested using a life-sized growth chamber mock-up. A comparison of using raw camera coordinates data and using KinectFusion data is presented. The results suggest that the KinectFusion pose estimation is fairly accurate, only decreasing accuracy by a few millimeters at distances of roughly 0.8 meters.
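
    The depth-filtering step described above, removing noise and undesired surfaces before reconstruction, could be sketched as a simple range mask on the depth frame. The near/far thresholds and toy frame below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def filter_depth(depth_mm, near=500, far=1200):
    """Keep only depth pixels in a plausible plant range (hypothetical thresholds).
    Out-of-range pixels, including the Kinect's 0 'no reading' value, are zeroed,
    which discards surfaces such as a back wall before reconstruction."""
    out = depth_mm.astype(np.float32)
    out[(out < near) | (out > far)] = 0.0
    return out

# Toy 2x3 depth frame in millimetres: 0 = no reading, 2000 = back wall.
frame = np.array([[0, 800, 2000],
                  [650, 900, 450]], dtype=np.uint16)
print(filter_depth(frame))  # pixels outside [500, 1200] mm become 0
```

Zeroing undesired pixels shrinks the volume KinectFusion has to integrate, which matches the abstract's point about lower memory use and higher-resolution real-time reconstruction of the plant itself.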

    Development of a mobile robotic phenotyping system for growth chamber-based studies of genotype x environment interactions

    In order to fully understand the interaction between phenotype and genotype x environment to improve crop performance, a large amount of phenotypic data is needed. Studying plants of a given strain under multiple environments can greatly help to reveal their interactions. This thesis presents two key portions of the development of the Enviratron rover, a robotic system that aims to autonomously collect the labor-intensive data required to perform experiments in this area. The rover is part of a larger project which will track plant growth in multiple environments. The first aspects of the robot discussed in this thesis are the system hardware and the main, or whole-chamber, imaging system. Semi-autonomous behavior is currently achieved, and the system performance in probing leaves is quantified and discussed. In contrast to existing systems, the rover can follow magnetic tape along all four directions (front, left, back, right), and uses a Microsoft Kinect v2 mounted on the end-effector of a robotic arm to position a threaded rod, simulating future sensors such as a fluorimeter and Raman spectrometer, at a desired position and orientation. Advantages of the tape following include being able to reliably move both between chambers and within a chamber regardless of dust and lighting conditions. The robot arm and Kinect system is unique in its speed at reconstructing a (filtered) environment combined with its accuracy at positioning sensors. A comparison of using raw camera coordinates data and using KinectFusion data is presented. The results suggest that the KinectFusion pose estimation is fairly accurate, only decreasing accuracy by a few millimeters at distances of roughly 0.8 meters. The system can consistently position sensors to within 4 cm of the goal, and often within 3 cm. 
The system is shown to be accurate enough to position sensors to within ±9 degrees of a desired orientation, although currently this accuracy requires human input to fully utilize the Kinect’s feedback. The second aspect of the robot presented in this thesis is a framework for generating collision-free robot arm motion within the chamber. This framework uses feedback from the Kinect sensor and is based on the Probabilistic Roadmaps (PRM) technique, which involves creating a graph of collision-free nodes and edges, and then searching for an acceptable path. The variant presented uses a dilated, down-sampled KinectFusion volume as input for rapid collision checking, effectively representing the environment as a discretized grid and representing the robot arm as a collection of spheres. The approach combines many desirable characteristics of previous PRM methods and other collision-avoidance schemes, and is aimed at providing a reliable, rapidly-constructed, highly-connected roadmap which can be queried multiple times in a static environment, such as a growth chamber or a greenhouse. In a sample plant configuration with several of the most challenging practical goal poses, it is shown to create a roadmap in an average time of 32.5 seconds. One key feature is that nodes are added near the goal during each query, in order to increase accuracy at the expense of increased query time. A completed graph is searched for an optimal path connecting nodes near the starting pose and the desired end pose. The fastest graph search studied was an implementation of the A* algorithm. Queries using this framework took an average time of 0.46 seconds. The average distance between the attained pose and the desired location was 2.7 cm. The average C-space distance between the attained and desired orientations was 3.65 degrees. 
The research suggests that the robotic framework presented has the potential to fulfill the main hardware and motion requirements of an autonomous indoor phenotyping robot, and can generate desired collision-free robot arm motion.
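
    The PRM-plus-A* pipeline described above (sample collision-free nodes, connect nearby ones, then search the roadmap) could be sketched on a 2-D occupancy grid. This is a minimal stand-in, not the thesis's implementation: the hand-made obstacle wall, a point robot instead of a collection of spheres, and all numeric parameters below are assumptions for illustration:

```python
import heapq, math, random

# A 2-D discretized occupancy grid stands in for the thesis's dilated,
# down-sampled KinectFusion volume: a wall-like obstacle at y = 5.
random.seed(1)
OCC = {(x, 5) for x in range(3, 8)}

def collision_free(p, q, step=0.25):
    # Sample points along the segment and test them against occupied cells,
    # treating the robot as a point for brevity (the thesis uses spheres).
    n = max(1, int(math.dist(p, q) / step))
    for i in range(n + 1):
        t = i / n
        cell = (round(p[0] + t * (q[0] - p[0])), round(p[1] + t * (q[1] - p[1])))
        if cell in OCC:
            return False
    return True

# Roadmap construction: sample free nodes, then connect near neighbours.
nodes = [(0.0, 0.0), (9.0, 9.0)]  # start and goal, added as in a query phase
while len(nodes) < 40:
    p = (random.uniform(0, 9), random.uniform(0, 9))
    if (round(p[0]), round(p[1])) not in OCC:
        nodes.append(p)
edges = {i: [] for i in range(len(nodes))}
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        if math.dist(nodes[i], nodes[j]) < 3.0 and collision_free(nodes[i], nodes[j]):
            edges[i].append(j)
            edges[j].append(i)

def astar(start, goal):
    # A* over the roadmap with a Euclidean heuristic, as in the query step.
    pq = [(math.dist(nodes[start], nodes[goal]), 0.0, start, [start])]
    g, seen = {start: 0.0}, set()
    while pq:
        _, cost, u, path = heapq.heappop(pq)
        if u == goal:
            return path
        if u in seen:
            continue
        seen.add(u)
        for v in edges[u]:
            c = cost + math.dist(nodes[u], nodes[v])
            if c < g.get(v, float("inf")):
                g[v] = c
                heapq.heappush(pq, (c + math.dist(nodes[v], nodes[goal]), c, v, path + [v]))
    return None  # start and goal lie in different roadmap components

path = astar(0, 1)
print(path)
```

Because the roadmap is built once and only the search runs per query, repeated queries in a static chamber are cheap, which is consistent with the reported 32.5 s construction versus 0.46 s query times.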

    Development of a Mobile Robotic Phenotyping System for Growth Chamber-based Studies of Genotype x Environment Interactions

    To increase understanding of the interaction between phenotype and genotype x environment to improve crop performance, large amounts of phenotypic data are needed. Studying plants of a given strain under multiple environments can greatly help to reveal their interactions. To collect the labor-intensive data required to perform experiments in this area, a Mecanum-wheeled, magnetic-tape-following indoor rover has been developed to accurately and autonomously move between and inside growth chambers. Integration of the motor controllers, a robot arm, and a Microsoft Kinect (v2) 3D sensor was achieved in a customized C++ program. Detecting and segmenting plants in a multi-plant environment is a challenging task, which can be aided by integration of depth data into these algorithms. Image-processing functions were implemented to filter the depth image to minimize noise and remove undesired surfaces, reducing the memory requirement and allowing the plant to be reconstructed at a higher resolution in real time. Three-dimensional meshes representing plants inside the chamber were reconstructed using the Kinect SDK’s KinectFusion. After transforming user-selected points in camera coordinates to robot-arm coordinates, the robot arm is used in conjunction with the rover to probe desired leaves, simulating the future use of sensors such as a fluorimeter and Raman spectrometer. This paper reports the system architecture and some preliminary results of the system.

    Prediction of Early Vigor from Overhead Images of Carinata Plants

    Breeding more resilient, higher yielding crops is an essential component of ensuring ongoing food security. Early season vigor is significantly correlated with yields and is often used as an early indicator of fitness in breeding programs. Early vigor can be a useful indicator of the health and strength of plants, with benefits such as improved light interception, reduced surface evaporation, and increased biological yield. However, vigor is challenging to measure analytically and is often rated using subjective visual scoring. This traditional method of breeder scoring becomes cumbersome as the size of breeding programs increases. In this study, we used hand-held cameras fitted on gimbals to capture images which were then used as the source for automated vigor scoring. We have employed a novel image metric, the extent of plant growth from the row centerline, as an indicator of vigor. Along with this feature, additional features were used for training a random forest model and a support vector machine, both of which were able to predict expert vigor ratings with 88.9% and 88% accuracy, respectively, providing the potential for more reliable, higher throughput vigor estimates.
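
    The prediction setup described above (image features, led by growth extent from the row centerline, fed to a random forest that targets expert ratings) could be sketched as follows. The features, rating scale mapping, and data below are synthetic assumptions; the paper's actual feature set and data are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the image features: per-plot growth extent from the
# row centerline plus two extra hypothetical coverage/color features.
n = 300
extent = rng.uniform(0, 30, n)              # cm of growth from the centerline
coverage = 2 * extent + rng.normal(0, 5, n)  # correlated canopy-coverage proxy
greenness = rng.uniform(0.2, 0.9, n)         # uninformative color proxy
X = np.column_stack([extent, coverage, greenness])

# Hypothetical expert vigor rating (1-5), made to track growth extent since the
# paper reports that metric is a strong indicator of vigor.
y = np.clip((extent // 6).astype(int) + 1, 1, 5)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {model.score(Xte, yte):.2f}")
```

A support vector machine could be swapped in via `sklearn.svm.SVC` with the same train/test split, paralleling the paper's two-model comparison.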

    A comprehensive review of fruit and vegetable classification techniques

    Recent advancements in computer vision have enabled wide-ranging applications in every field of life. One such application area is fresh produce classification, but the classification of fruits and vegetables has proven to be a complex problem and needs to be further developed. Fruit and vegetable classification presents significant challenges due to interclass similarities and irregular intraclass characteristics. Selection of appropriate data acquisition sensors and feature representation approaches is also crucial due to the huge diversity of the field. Fruit and vegetable classification methods have been developed for quality assessment and robotic harvesting, but the current state-of-the-art has been developed for limited classes and small datasets. The problem is of a multi-dimensional nature and offers significantly hyperdimensional features, which is one of the major challenges with current machine learning approaches. Substantial research has been conducted on the design and analysis of classifiers for hyperdimensional features, which require significant computational power to optimise. In recent years, numerous machine learning techniques, for example, Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Decision Trees, Artificial Neural Networks (ANN) and Convolutional Neural Networks (CNN), have been exploited with many different feature description methods for fruit and vegetable classification in many real-life applications. This paper presents a critical comparison of different state-of-the-art computer vision methods proposed by researchers for classifying fruits and vegetables.

    Modeling leaf growth of rosette plants using infrared stereo image sequences

    In this paper, we present a novel multi-level procedure for finding and tracking leaves of a rosette plant, in our case tobacco plants up to 3 weeks old, during early growth from infrared-image sequences. This allows measuring important plant parameters, e.g. leaf growth rates, in an automatic and non-invasive manner. The procedure consists of three main stages: preprocessing, leaf segmentation, and leaf tracking. Leaf-shape models are applied to improve leaf segmentation, and further used for measuring leaf sizes and handling occlusions. Leaves typically grow radially away from the stem, a property that is exploited in our method, reducing the dimensionality of the tracking task. We successfully tested the method on infrared image sequences showing the growth of tobacco-plant seedlings up to an age of about 30 days, which allows measuring relevant plant growth parameters such as leaf growth rate. By robustly fitting a suitably modified autocatalytic growth model to all growth curves from plants under the same treatment, average plant growth models could be derived. Future applications of the method include plant-growth monitoring for optimizing plant production in greenhouses or plant phenotyping for plant research.
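
    Fitting a growth model to leaf-size curves, as described above, could be sketched with a plain logistic (autocatalytic) curve and least-squares fitting. This is a simplified stand-in: the paper fits a suitably modified model with a robust procedure, whereas the sketch below uses an unmodified logistic, ordinary `scipy.optimize.curve_fit`, and synthetic data with assumed parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    """Autocatalytic (logistic) growth: area rises toward asymptote A
    at rate k, with inflection at time t0."""
    return A / (1.0 + np.exp(-k * (t - t0)))

# Synthetic leaf-area curve (cm^2) over 30 days standing in for tracked data;
# A=25, k=0.4, t0=15 are arbitrary illustrative values.
t = np.linspace(0, 30, 31)
rng = np.random.default_rng(0)
area = logistic(t, 25.0, 0.4, 15.0) + rng.normal(0, 0.3, t.size)

# Least-squares fit recovers the growth parameters from the noisy curve.
(A, k, t0), _ = curve_fit(logistic, t, area, p0=[20.0, 0.3, 10.0])
print(round(A, 1), round(k, 2), round(t0, 1))
```

Averaging the fitted parameters across all plants under one treatment would then give the treatment-level growth model the abstract refers to.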