114 research outputs found

    Prediction of Early Vigor from Overhead Images of Carinata Plants

    Breeding more resilient, higher yielding crops is an essential component of ensuring ongoing food security. Early season vigor is significantly correlated with yield and is often used as an early indicator of fitness in breeding programs. Early vigor can be a useful indicator of the health and strength of plants, with benefits such as improved light interception, reduced surface evaporation, and increased biological yield. However, vigor is challenging to measure analytically and is often rated using subjective visual scoring. This traditional method of breeder scoring becomes cumbersome as the size of breeding programs increases. In this study, we used hand-held cameras fitted on gimbals to capture images which were then used as the source for automated vigor scoring. We employed a novel image metric, the extent of plant growth from the row centerline, as an indicator of vigor. Along with this feature, additional features were used to train a random forest model and a support vector machine, which predicted expert vigor ratings with 88.9% and 88% accuracy, respectively, providing the potential for more reliable, higher-throughput vigor estimates.
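The modeling step this abstract describes can be sketched as follows: image-derived features feeding a random forest classifier of vigor ratings. The feature names, data, and two-class labels below are hypothetical stand-ins for the paper's actual measurements and expert scores, shown only to illustrate the workflow.

```python
# Sketch: predicting vigor classes from image features with a random forest.
# All features and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-plot features: growth extent from the row centerline,
# canopy area, and a normalized greenness index.
growth_extent = rng.uniform(0, 30, n)    # cm beyond the row centerline
canopy_area = rng.uniform(50, 500, n)    # cm^2
greenness = rng.uniform(0.2, 0.9, n)     # excess-green index, normalized
X = np.column_stack([growth_extent, canopy_area, greenness])
y = (growth_extent > 15).astype(int)     # 0 = low vigor, 1 = high vigor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

The same feature matrix could be passed to `sklearn.svm.SVC` to reproduce the paper's second model for comparison.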

    Robotic crop row tracking around weeds using cereal-specific features

    Crop row following is especially challenging in narrow-row cereal crops, such as wheat. Separation between plants within a row disappears at an early growth stage, and canopy closure between rows, when leaves from different rows start to occlude each other, occurs three to four months after the crop emerges. Canopy closure makes it challenging to identify separate rows through computer vision, as clear lanes become obscured. Cereal crops are grass species, so their leaves have a predictable shape and orientation. We introduce an image processing pipeline which exploits grass shape to identify and track rows. The key observation exploited is that leaf orientations tend to be vertical along rows and horizontal between rows, due to the location of the stems within the rows. Adaptive mean-shift clustering on Hough line segments is then used to obtain lane centroids, followed by nearest-neighbor data association to create lane line candidates in 2D space. Lane parameters are fit with linear regression, and a Kalman filter is used to track lanes between frames. The method achieves sub-50 mm accuracy, which is sufficient for placing a typical agri-robot's wheels between real-world, early-growth wheat crop rows to drive between them, as long as the crop is seeded at a wider spacing, such as 180 mm row spacing for an 80 mm wheel-width robot.
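The orientation cue and clustering step described above can be sketched in a few lines: keep near-vertical line segments (leaves along rows), then cluster their horizontal positions with mean shift to recover one centroid per row. The segments below are synthetic; a real pipeline would obtain them from `cv2.HoughLinesP` on an edge image, and the angle threshold and bandwidth are illustrative values, not the paper's.

```python
# Sketch: filter Hough-style segments by orientation, then mean-shift
# cluster their x-midpoints into lane centroids. Segments are synthetic.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(1)
segs = []
for row_x in (100, 300, 500):            # three crop rows in image x
    for _ in range(20):
        x = row_x + rng.normal(0, 5)
        y = rng.uniform(0, 480)
        segs.append((x, y, x + rng.normal(0, 3), y + 40))  # near-vertical leaf
    segs.append((row_x + 100, 200, row_x + 160, 205))      # horizontal, between rows
segs = np.array(segs)

angles = np.degrees(np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0]))
vertical = segs[np.abs(np.abs(angles) - 90) < 30]          # keep steep segments

midpoints_x = ((vertical[:, 0] + vertical[:, 2]) / 2).reshape(-1, 1)
centroids = np.sort(MeanShift(bandwidth=50).fit(midpoints_x).cluster_centers_.ravel())
print(centroids)   # one centroid per crop row
```

The paper then fits lane lines to these clusters by linear regression and smooths them over frames with a Kalman filter.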

    Fleets of robots for environmentally-safe pest control in agriculture

    Feeding the growing global population requires an annual increase in food production. This requirement suggests an increase in the use of pesticides, which represents an unsustainable chemical load for the environment. To reduce pesticide input and preserve the environment while maintaining the necessary level of food production, the efficiency of relevant processes must be drastically improved. Within this context, this research strived to design, develop, test and assess a new generation of automatic and robotic systems for effective weed and pest control, aimed at diminishing the use of agricultural chemical inputs, increasing crop quality and improving the health and safety of production operators. To achieve this overall objective, a fleet of heterogeneous ground and aerial robots was developed and equipped with innovative sensors, enhanced end-effectors and improved decision control algorithms to cover a large variety of agricultural situations. This article describes the scientific and technical objectives, challenges and outcomes achieved in three common crops.

    Outdoor computer vision and weed control


    Automatic plant features recognition using stereo vision for crop monitoring

    Machine vision and robotic technologies have the potential to accurately monitor plant parameters which reflect plant stress and water requirements, for use in farm management decisions. However, autonomous identification of individual plant leaves on a growing plant under natural conditions is a challenging task for vision-guided agricultural robots, due to the complexity of data relating to various stages of growth and ambient environmental conditions. Numerous machine vision studies have described the shape of leaves individually presented to a camera, either to identify plant species or to autonomously detect multiple leaves from small seedlings under greenhouse conditions. Machine vision-based detection of individual leaves on a developed plant canopy, including the challenges presented by overlapping leaves, using depth perception under natural outdoor conditions has yet to be reported. Stereo vision has recently emerged for use in a variety of agricultural applications and is expected to provide an accurate method for plant segmentation and identification which can benefit from depth properties and robustness. This thesis presents a plant leaf extraction algorithm using a stereo vision sensor. The algorithm performs multiple-leaf segmentation and separation of overlapping leaves using a combination of image features, specifically colour, shape and depth. Separation of connected and overlapping leaves relies on measuring the discontinuity in the depth gradient of the disparity maps. Two techniques have been developed to implement this task, based on global and local measurement. A geometrical plane can be extracted from each segmented leaf and used to parameterise a 3D model of the plant image and to measure the inclination angle of each individual leaf.
The stem and branch segmentation and counting method was developed based on the vesselness measure and the Hough transform technique. Furthermore, a method for reconstructing the segmented parts of hibiscus plants is presented, and a 2.5D model is generated for the plant. Experimental tests were conducted with two different selected plants, cotton of different sizes and hibiscus, in an outdoor environment under varying light conditions. The proposed algorithm was evaluated using 272 cotton and hibiscus plant images. The results show an observed enhancement in leaf detection when utilising depth features, where many leaves in various positions and shapes (single, touching and overlapping) were detected successfully. Depth properties were more effective in separating occluded and overlapping leaves, with a high separation rate of 84%, and these can be detected automatically without adding any artificial tags on the leaf boundaries. The results exhibit an acceptable segmentation rate of 78% for individual plant leaves, thereby differentiating the leaves from their complex backgrounds and from each other. The results show almost identical performance for both species under various lighting and environmental conditions. For the stem and branch detection algorithm, experimental tests were conducted on 64 colour images of both species under different environmental conditions. The results show higher stem and branch segmentation rates for hibiscus indoor images (82%) compared to hibiscus outdoor images (49.5%) and cotton images (21%). The segmentation and counting of plant features could provide accurate estimates of plant growth parameters, which can be beneficial for many agricultural tasks and applications.
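The depth-discontinuity cue at the heart of the leaf separation step can be illustrated directly: a jump in the disparity gradient marks the boundary between an occluding leaf and the leaf behind it. The disparity map below is synthetic (two flat "leaves" at different depths); a real map would come from a calibrated stereo pair, and the gradient threshold is an illustrative value.

```python
# Sketch: find leaf boundaries as large jumps in the disparity gradient.
# The disparity map is synthetic; real maps come from stereo matching.
import numpy as np

disparity = np.full((100, 100), 20.0)   # background leaf, farther away
disparity[30:70, 30:70] = 35.0          # occluding leaf, closer to camera

gy, gx = np.gradient(disparity)
grad_mag = np.hypot(gx, gy)
boundary = grad_mag > 5.0               # large depth jump = leaf boundary

print("boundary pixels:", int(boundary.sum()))
```

Thresholding the gradient magnitude yields a closed boundary around the occluding leaf even where the two leaves share colour and texture, which is why depth separated overlapping leaves more reliably than colour or shape alone in the thesis's experiments.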

    Navigation of an Autonomous Differential Drive Robot for Field Scouting in Semi-structured Environments

    In recent years, growers' interest in introducing autonomous robots into agricultural fields has been rejuvenated by ever-increasing labor costs and recently declining numbers of seasonal workers. The utilization of customized, autonomous agricultural robots can have a profound impact on future orchard operations by providing low-cost, meticulous inspection. Different sensors have been proven proficient in agrarian navigation, including GPS, inertial, magnetic, rotary encoding, time-of-flight and vision sensors. To compensate for anticipated disturbances, variances and constraints of the outdoor semi-structured environment, a differential-drive vehicle is implemented as an easily controllable system to conduct tasks such as imaging and sampling. To verify the motion control of a robot custom-designed for strawberry fields, the task is separated into multiple phases to manage the over-bed and cross-bed operation needs. In particular, during the cross-bed segment, an elevated strawberry bed provides distance references used in a logic filter and a tuned PID algorithm for safe and efficient travel. Due to significant sources of uncertainty such as wheel slip and the vehicle model, nonlinear robust controllers are designed for the cross-bed motion, relying purely on vision feedback. A simple image filter algorithm was developed for strawberry row detection, in which pixels corresponding to the bed center are tracked while the vehicle is in controlled motion. This incorporated the derivation and formulation of a bounded uncertainty parameter employed in the nonlinear control. Simulation of the entire system was subsequently completed to ensure the control capability before successful validation in multiple commercial farms.
It is anticipated that, with the developed algorithms, the validation of fully autonomous robotic systems functioning in agricultural crops will provide heightened efficiency of needed, costly services: scouting, disease detection, collection, and distribution.
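The tuned-PID idea for cross-bed travel can be sketched as a loop that steers to hold a measured distance to the elevated bed edge at a setpoint. The gains, setpoint, and first-order plant response below are illustrative placeholders, not the thesis's identified vehicle model or tuned values.

```python
# Sketch: PID regulation of the distance to a strawberry bed edge.
# Gains and the toy plant model are illustrative, not the thesis's values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.1)
setpoint = 0.3          # desired standoff from the bed edge (m)
offset = 0.5            # initial measured distance (m)
for _ in range(100):
    u = pid.step(setpoint, offset)       # lateral velocity command (m/s)
    offset += u * pid.dt                 # toy first-order plant response
print(f"final standoff: {offset:.3f} m")
```

The thesis adds a logic filter ahead of the controller to reject spurious distance readings, and replaces this linear loop with nonlinear robust control where wheel slip and model uncertainty dominate.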

    Artificial intelligence and image processing applications for high-throughput phenotyping

    Doctor of Philosophy, Department of Computer Science, Mitchell L. Neilsen. The areas of Computer Vision and Scientific Computing have witnessed rapid growth in the last decade, with the fields of industrial robotics, automotive and healthcare acting as the primary vehicles for research and advancement. However, related research in other fields, such as agriculture, remains understudied. This dissertation explores the application of Computer Vision and Scientific Computing in an agricultural domain known as High-throughput Phenotyping (HTP). HTP is the assessment of complex seed traits such as growth, development, tolerance, resistance, ecology and yield, and the measurement of parameters that form more complex traits. The dissertation makes the following contributions. The first contribution is the development of algorithms to estimate morphometric traits such as length, width, area, and seed kernel count using 3-D graphics and static image processing, and the extension of existing algorithms for the same. The second contribution is the development of lightweight frameworks to aid in synthetic image dataset creation and image cropping for deep neural networks in HTP. Deep neural networks require a plethora of training data to yield results of the highest quality. However, no such training datasets are readily available for HTP research, especially on seed kernels. The proposed synthetic image generation framework helps generate a profusion of training data at will, to train neural networks from a meager sample of seed kernels. Besides requiring large quantities of data, deep neural networks require the input to be a certain size. However, not all available data are in the size required by the deep neural networks. The proposed image cropper helps to resize images without introducing any distortion, thereby making image data fit for consumption.
The third contribution is the design and analysis of supervised and self-supervised neural network architectures trained on synthetic images to perform the tasks of seed kernel classification, counting and morphometry. In the area of supervised image classification, the state-of-the-art neural network models VGG-16, VGG-19 and ResNet-101 are investigated. A Simple framework for Contrastive Learning of visual Representations (SimCLR) [137], Momentum Contrast (MoCo) [55] and Bootstrap Your Own Latent (BYOL) [123] are leveraged for self-supervised image classification. The instance-segmentation deep neural network models Mask R-CNN and YOLO are utilized to perform the tasks of seed kernel classification, segmentation and counting. The results demonstrate the feasibility of deep neural networks for their respective tasks of classification and instance segmentation. In addition to estimating seed kernel count from static images, algorithms that aid in seed kernel counting from videos are proposed and analyzed. One proposed algorithm creates a slit image which can be analyzed to estimate seed count. Once the slit image is created, the video is no longer required to estimate seed count, thereby significantly lowering the computational resources required for the estimation. The fourth contribution is the development of an end-to-end, automated image capture system for single seed kernel analysis. In addition to estimating length and width from 2-D images, the proposed system estimates the volume of a seed kernel from 2-D images using the technique of volume sculpting. The relative standard deviation of the results produced by the proposed technique is lower (better) than that of volumetric estimation using the ellipsoid slicing technique.
The fifth contribution is the development of image processing algorithms that provide feature enhancements to mobile applications, improving on-site phenotyping capabilities. Algorithms for two high-value features, leaf angle estimation and fractional plant cover estimation, are developed. The leaf angle estimation feature estimates the angle between stem and leaf for images captured with mobile phone cameras, whereas fractional plant cover estimation helps determine companion plants, i.e., plants that are able to co-exist and mutually benefit. The proposed techniques, frameworks and findings lay a solid foundation for future Computer Vision and Scientific Computing research in the domain of agriculture. The contributions are significant because the dissertation not only proposes techniques but also develops low-cost, end-to-end frameworks to leverage them in a scalable fashion.
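The slit-image idea for counting seeds from video can be sketched concretely: one pixel column is taken from every frame at the position seeds fall past, and the columns are stacked side by side, so counting blobs in a single 2-D image replaces processing the whole video. The frames below are synthetic, and the run-counting method is a simple stand-in for the dissertation's analysis of the slit image.

```python
# Sketch: build a slit image from video frames, then count seeds as
# connected bright runs along the time axis. Frames are synthetic.
import numpy as np

H, W, n_frames, col = 64, 64, 60, 32
frames = np.zeros((n_frames, H, W), dtype=np.uint8)
# Three "seeds" crossing the slit column at different frames and heights.
for t0, y0 in [(5, 10), (25, 30), (45, 50)]:
    for dt in range(4):                       # each seed spans 4 frames
        frames[t0 + dt, y0 - 2:y0 + 3, col - 2:col + 3] = 255

slit = frames[:, :, col].T                    # slit image, shape (H, n_frames)

active = slit.max(axis=0) > 0                 # frames with a seed in the slit
count = int(np.sum(active[1:] & ~active[:-1]) + active[0])  # rising edges
print("seed count:", count)
```

The memory saving is the point: the `(H, n_frames)` slit image is all that must be retained, regardless of frame width or video length.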

    The development and evaluation of computer vision algorithms for the control of an autonomous horticultural vehicle

    Economic and environmental pressures have led to a demand for reduced chemical use in crop production. In response to this, precision agriculture techniques have been developed that aim to increase the efficiency of farming operations by more targeted application of chemical treatment. The concept of plant scale husbandry (PSH) has emerged as the logical extreme of precision techniques, where crop and weed plants are treated on an individual basis. To investigate the feasibility of PSH, an autonomous horticultural vehicle has been developed at the Silsoe Research Institute. This thesis describes the development of computer vision algorithms for the experimental vehicle which aim to aid navigation in the field and also allow differential treatment of crop and weed. The algorithm, based upon an extended Kalman filter, exploits the semi-structured nature of the field environment in which the vehicle operates, namely the grid pattern formed by the crop planting. By tracking this grid pattern in the images captured by the vehicle's camera as it traverses the field, it is possible to extract information to aid vehicle navigation, such as bearing and offset from the grid of plants. The grid structure can also act as a cue for crop/weed discrimination on the basis of plant position on the ground plane. In addition to tracking the grid pattern, the Kalman filter also estimates the mean spacing between the rows and between plants within the grid, to cater for variations in the planting procedure. Experiments are described which test the localisation accuracy of the algorithms, off-line with data captured from the vehicle's camera, and on-line in both a simplified testbed environment and the field. It is found that the algorithms allow safe navigation along the rows of crop.
Further experiments demonstrate the crop/weed discrimination performance of the algorithm, both off-line and on-line in a crop treatment experiment performed in the field, where all of the crop plants were correctly targeted and no weeds were mistakenly treated.
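The predict/update cycle behind this tracking approach can be illustrated with a reduced example: a Kalman filter maintaining the vehicle's offset and bearing relative to the crop grid from noisy per-frame observations. The thesis uses an extended Kalman filter over a fuller grid model that also estimates row and plant spacing; this linear, two-state version with illustrative noise values only shows the recursion.

```python
# Sketch: linear Kalman filter over [offset, bearing] relative to the crop
# grid. The thesis's filter is extended and richer; values are illustrative.
import numpy as np

x = np.zeros(2)                  # state estimate: [offset (m), bearing (rad)]
P = np.eye(2)                    # state covariance
F = np.eye(2)                    # transition: slowly varying state
Q = np.eye(2) * 1e-4             # process noise
Hm = np.eye(2)                   # both quantities observed from each image
R = np.diag([0.02**2, 0.05**2])  # measurement noise

rng = np.random.default_rng(2)
true_state = np.array([0.10, 0.05])
for _ in range(50):
    z = true_state + rng.normal(0, [0.02, 0.05])       # noisy image measurement
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)    # Kalman gain
    x, P = x + K @ (z - Hm @ x), (np.eye(2) - K @ Hm) @ P  # update
print(f"estimated offset {x[0]:.3f} m, bearing {x[1]:.3f} rad")
```

In the full system the nonlinear mapping from grid pose to image features is what makes the extended form of the filter necessary.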

    A Review of the Challenges of Using Deep Learning Algorithms to Support Decision-Making in Agricultural Activities

    Deep Learning has been successfully applied to image recognition, speech recognition, and natural language processing in recent years. Therefore, there has been an incentive to apply it in other fields as well. Agriculture is one of the most important fields in which the application of deep learning still needs to be explored, as it has a direct impact on human well-being. In particular, there is a need to explore how deep learning models can be used as a tool for optimal planting, land use, yield improvement, production/disease/pest control, and other activities. The vast amount of data received from sensors in smart farms makes it possible to use deep learning as a model for decision-making in this field. In agriculture, no two environments are exactly alike, which makes testing, validating, and successfully implementing such technologies much more complex than in most other industries. This paper reviews some recent scientific developments in the field of deep learning that have been applied to agriculture, and highlights some challenges and potential solutions using deep learning algorithms in agriculture. The results in this paper indicate that by employing new methods from deep learning, higher performance in terms of accuracy and lower inference time can be achieved, and the models can be made useful in real-world applications. Finally, some opportunities for future research in this area are suggested.

    This work is supported by the R&D Project BioDAgro, Sistema operacional inteligente de informação e suporte à decisão em AgroBiodiversidade, project PD20-00011, promoted by Fundação La Caixa and Fundação para a Ciência e a Tecnologia, taking place at the C-MAST Centre for Mechanical and Aerospace Sciences and Technology, Department of Electromechanical Engineering of the University of Beira Interior, Covilhã, Portugal.