
    MHW-PD: a robust rice panicles counting algorithm based on deep learning and multiscale hybrid window

    Assessing in-field rice panicle yields accurately and automatically is one of the keys to realizing high-throughput rice breeding in modern smart farming. However, practical rice fields normally contain many panicles of different, often very small sizes, particularly when large numbers of panicles are captured in a single image. In these cases, integral panicle features are difficult to extract because of the limited original panicle information and the substantial clutter caused by heavily compacted leaves and stems, which results in poor counting efficacy. In this paper, we propose a simple yet effective method termed Multi-Scale Hybrid Window Panicle Detect (MHW-PD), which focuses on enhancing panicle features to detect and count large numbers of small-sized rice panicles in in-field scenes. On the basis of quantifying and analyzing the relationship among the receptive field, the size of the input image, and the average dimensions of the panicles, MHW-PD dynamically chooses an appropriate feature-learning network and constructs an adaptive multi-scale hybrid window (MHW), which maximizes the richness of the panicle features. In addition, a fusion algorithm removes the repeated counting of broken panicles to produce the final panicle count. In extensive experiments, MHW-PD achieved ~87% panicle counting accuracy, and the counting accuracy decreased by only ~8% when the number of panicles per image increased from 0 to 80, making it more stable than all the competing methods adopted in this work. Qualitatively and quantitatively, MHW-PD is demonstrated to be able to handle high densities of panicles.
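
    The abstract above includes no code; purely to illustrate the windowing-and-fusion idea (tile the image with windows at several scales, detect per window, then merge overlapping boxes so a panicle split across windows is counted once), a minimal Python sketch might look as follows. The detect_fn callable, window sizes, and IoU threshold are placeholders of my choosing, not the authors' MHW-PD implementation.

```python
# Hypothetical sketch of multi-scale windowed detection with overlap fusion.
# `detect_fn` stands in for any trained panicle detector returning boxes
# (x0, y0, x1, y1) in window coordinates; it is not the MHW-PD network.

def iter_windows(h, w, size, stride):
    """Yield (y0, x0, y1, x1) window crops covering an h x w image."""
    for y in range(0, max(h - size, 0) + 1, stride):
        for x in range(0, max(w - size, 0) + 1, stride):
            yield y, x, min(y + size, h), min(x + size, w)

def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def count_panicles(image, detect_fn, sizes=(256, 512), iou_thresh=0.3):
    """Detect in windows at several scales, then merge duplicate boxes."""
    h, w = image.shape[:2]
    boxes = []
    for size in sizes:
        for y0, x0, y1, x1 in iter_windows(h, w, size, stride=size // 2):
            for bx0, by0, bx1, by1 in detect_fn(image[y0:y1, x0:x1]):
                # Translate window-local boxes to image coordinates.
                boxes.append((bx0 + x0, by0 + y0, bx1 + x0, by1 + y0))
    # Greedy fusion: a box overlapping an already-kept box is treated as
    # the same (possibly broken) panicle and is not counted again.
    kept = []
    for b in sorted(boxes, key=lambda r: -(r[2] - r[0]) * (r[3] - r[1])):
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return len(kept), kept
```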

    Yielding to the image: how phenotyping reproductive growth can assist crop improvement and production

    Reproductive organs are the main reason we grow and harvest most plant species as crops, yet they receive less attention in phenotyping because of their complexity and inaccessibility for analysis. This review highlights recent progress towards quantitative high-throughput phenotyping of reproductive development, focusing on three impactful areas that are pivotal for plant breeding and crop production. First, we look at phenotyping phenology, summarizing the indirect and direct approaches available; this is essential for genotype-by-environment analysis and enables effective interpretation of management, agronomic, and physiological interventions. Second, we look at pollen development and production, along with anther characteristics; these are critical points of vulnerability for yield loss when stress occurs before and during flowering, and are of particular interest for hybrid technology development. Third, we elaborate on phenotyping yield components, indirectly or directly during the season, with numerical or growth-related approaches and post-harvest processing. Finally, we summarise the opportunities and challenges ahead for phenotyping reproductive growth, their feasibility and impact, with emphasis on plant breeding applications and targeted yield increases.

    Automated crop plant counting from very high-resolution aerial imagery

    Knowing before harvest how many plants have emerged and how they are growing is key to optimizing labour and the efficient use of resources. Unmanned aerial vehicles (UAVs) are a useful tool for fast and cost-efficient data acquisition. However, the imagery needs to be converted into operational spatial products that crop producers can use to gain insight into the spatial distribution of the number of plants in the field. In this research, an automated method for counting plants from very high-resolution UAV imagery is addressed. The proposed method uses machine vision (the Excess Green Index and Otsu's method) and transfer learning using convolutional neural networks to identify and count plants. The integrated methods were implemented to count 10-week-old spinach plants in an experimental field with a surface area of 3.2 ha. Validation data of plant counts were available for 1/8 of the surface area. The results showed that the proposed methodology can count plants with an accuracy of 95% at a spatial resolution of 8 mm/pixel in an area up to 172 m². Moreover, when the spatial resolution decreases by 50%, the maximum additional counting error is 0.7%. Finally, a total of 170,000 plants in an area of 3.5 ha was computed, with an error of 42.5%. The study shows that it is feasible to count individual plants using UAV-based off-the-shelf products and that machine vision/learning algorithms make it possible to translate image data into practical information for non-experts.
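
    The machine-vision stage named above (Excess Green Index plus Otsu's method) is standard enough to sketch. The following minimal example, assuming scikit-image and an RGB image scaled to [0, 1], covers only that stage, not the paper's convolutional transfer-learning step; the min_area filter is an illustrative assumption.

```python
# Minimal sketch of the vegetation-index stage: the Excess Green Index
# (ExG = 2G - R - B) highlights plant pixels, Otsu's method picks a
# threshold automatically, and connected components give a crude plant
# count. The `min_area` speckle filter is an assumption, not the paper's.
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_plants(rgb, min_area=50):
    """rgb: float array in [0, 1] with shape (H, W, 3). Returns a count."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b                 # Excess Green Index
    mask = exg > threshold_otsu(exg)      # automatic binarization
    regions = regionprops(label(mask))    # connected components
    return sum(1 for p in regions if p.area >= min_area)
```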

    Detecting Olives with Synthetic or Real Data? Olive the Above

    Modern robotics has enabled advances in yield estimation for precision agriculture. However, when applied to the olive industry, the high variation of olive colors and their similarity to the background leaf canopy present a challenge. Labeling several thousand very dense olive grove images for segmentation is a labor-intensive task. This paper presents a novel approach to detecting olives without the need to manually label data. In this work, we present the world's first olive detection dataset comprised of synthetic and real olive tree images. This is accomplished by generating an auto-labeled photorealistic 3D model of an olive tree, whose geometry is then simplified for lightweight rendering. In addition, experiments conducted with a mix of synthetically generated and real images yield an improvement of up to 66% compared to using only a small sample of real data. When access to real, human-labeled data is limited, a combination of mostly synthetic data and a small amount of real data can enhance olive detection.
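
    As a rough illustration of the data mix described above (mostly synthetic images plus a small human-labeled sample), a PyTorch sketch might simply concatenate the two sources into one shuffled training set. The dataset class, directory layout, and batch size here are hypothetical, not the paper's setup.

```python
# Hypothetical sketch: combine a large auto-labeled synthetic set with a
# small real set for detector training. Loading/decoding is elided; a
# real __getitem__ would return (image_tensor, target_boxes).
from pathlib import Path
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class OliveImages(Dataset):
    """Toy dataset: one sample per .jpg file under `root` (hypothetical)."""
    def __init__(self, root):
        self.paths = sorted(Path(root).glob("*.jpg"))
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, i):
        return str(self.paths[i])  # placeholder for (image, boxes)

synthetic = OliveImages("data/synthetic")  # auto-labeled renders
real = OliveImages("data/real")            # small human-labeled sample

# Shuffling interleaves the two sources within each epoch.
loader = DataLoader(ConcatDataset([synthetic, real]),
                    batch_size=8, shuffle=True)
```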

    Image-based deep learning approaches for plant phenotyping

    Doctor of Philosophy, Department of Computer Science, Doina Caragea. The genetic potential of plant traits remains unexplored due to challenges in available phenotyping methods. Deep learning can be used to build automatic tools for identifying, localizing, and quantifying plant features in agricultural images. This dissertation describes the development and evaluation of state-of-the-art deep learning approaches for several plant phenotyping tasks: characterization of rice root anatomy from microscopic root cross-section images, estimation of sorghum stomatal density and area from microscopic images of leaf surfaces, and estimation of chalkiness in rice exposed to high night temperature from images of rice grains.

    For the root anatomy task, anatomical traits such as the root, stele, and late metaxylem were identified using a deep learning model based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) with a pre-trained VGG-16 backbone. The model was trained on root cross-section images in which the traits of interest were manually annotated as rectangular bounding boxes using the LabelImg tool. The traits were likewise predicted as rectangular bounding boxes, which were compared with the ground-truth boxes using the intersection-over-union (IoU) metric to evaluate detection accuracy. The predicted bounding boxes were subsequently used to estimate root and stele diameter, as well as late metaxylem count and average diameter. Experimental results showed that the trained models can accurately detect and quantify anatomical features and are robust to image variations. Using the pre-trained VGG-16 network also enabled the training of accurate models with a relatively small number of annotated images, making this approach very attractive for adaptation to new tasks.

    For estimating sorghum stomatal density and area, a deep learning approach for instance segmentation was used, specifically a Mask Region-based Convolutional Neural Network (Mask R-CNN), which produces pixel-level annotations of stomata objects. The pre-trained ResNet-101 network was used as the model backbone, in combination with a feature pyramid network (FPN) that enables the model to identify objects at different scales. The Mask R-CNN model was trained on microscopic leaf-surface images in which the stomata had been manually labeled at the pixel level using the VGG Image Annotator tool. The predicted stomata masks were counted and subsequently used to estimate stomatal area. Experimental results showed a strong correlation between the predicted counts/stomatal area and the corresponding manually produced values. Furthermore, as for the root anatomy task, this study showed that very accurate results can be obtained with a relatively small number of annotated images.

    Working on the root anatomy detection and stomata segmentation tasks showed that manually annotating data, in terms of bounding boxes and especially pixel-level masks, can be tedious and time-consuming, even when a relatively small number of annotated images is used for training. To address this challenge, for the task of estimating chalkiness from images of rice grains exposed to high night temperatures, a weakly supervised approach was used, based on Gradient-weighted Class Activation Mapping (Grad-CAM). Instead of performing pixel-level segmentation of the chalkiness in rice images, the weakly supervised approach uses high-level annotations of images as chalky or not-chalky. A convolutional neural network (e.g., ResNet-101) for binary classification is trained to distinguish between chalky and not-chalky images, and the gradients of the chalky class are then used to compute a heatmap of the chalky area and a chalkiness score for each grain. Experimental results on both polished and unpolished rice grains, using standard instance classification and segmentation metrics, showed that Grad-CAM can accurately identify chalky grains and detect the chalky area. The results also showed that models trained on polished rice do not transfer to unpolished rice, suggesting that new models need to be trained and fine-tuned for other types of rice grains, and possibly for images taken under different conditions.

    In conclusion, this dissertation contributes to the field of deep learning by introducing new and challenging tasks that require adaptations of existing deep learning models, and to the fields of agricultural image analysis and plant phenotyping by introducing fully automated high-throughput tools for identifying, localizing, and quantifying plant traits of significant importance to breeding programs. All datasets and models trained in this dissertation have been made publicly available so that the deep learning community can use them and further advance the state of the art on these tasks. The resulting tools have also been made publicly available as web servers so that the plant breeding community can use them on images collected for similar tasks. Future work will focus on adapting the models used in this dissertation to other similar tasks, and on developing similar models for other tasks relevant to the plant breeding community and to the agricultural community at large.
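
    Of the methods named above, the Grad-CAM step lends itself to a compact sketch. The following assumes a torchvision ResNet-101 with a two-way (chalky vs. not-chalky) output head and shows generic Grad-CAM via forward/backward hooks; it is not the dissertation's exact pipeline or its chalkiness-scoring code.

```python
# Generic Grad-CAM sketch for a binary chalky/not-chalky classifier. The
# heatmap is the ReLU of the last conv block's activations weighted by
# the spatially averaged gradient of the "chalky" logit. The model here
# is untrained; in practice it would be fine-tuned on labeled grains.
import torch
import torch.nn.functional as F
from torchvision.models import resnet101

model = resnet101(num_classes=2).eval()
feats, grads = {}, {}

model.layer4.register_forward_hook(
    lambda m, i, o: feats.update(a=o))            # activations (1, C, h, w)
model.layer4.register_full_backward_hook(
    lambda m, gi, go: grads.update(a=go[0]))      # gradients w.r.t. them

def grad_cam(image, chalky_class=1):
    """image: (1, 3, H, W) float tensor. Returns an (H, W) map in [0, 1]."""
    logits = model(image)
    model.zero_grad()
    logits[0, chalky_class].backward()            # grads of the chalky logit
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # GAP over space
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # (1, h, w)
    cam = F.interpolate(cam[None], size=image.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-9)
```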

    Evaluating Host-Plant Resistance against Sugarcane Aphid (Melanaphis sacchari (Zehntner)) in Sorghum (Sorghum bicolor (L.) Moench)

    The sugarcane aphid (Melanaphis sacchari (Zehntner)) is an established and problematic pest of sorghum (Sorghum bicolor (L.) Moench) in the United States. This virulent pest of sorghum was initially identified in Southeast Texas and significantly affects production. Heavy infestation decreases the yield and quality of grain and forage sorghum, and the aphid's sticky honeydew secretions cause harvest losses and clog combines. Using artificial and natural infestations, 500 lines from Texas A&M AgriLife Research were evaluated, including for mechanisms of resistance and phenotypic traits useful for breeding. Resistant lines A/B.Tx3408 and A/B.Tx3409 were identified and released to the public in 2016. Grain and forage sorghum hybrids produced using the resistant lines also exhibited resistance. The resistant lines and hybrids produced from resistant sources were subsequently evaluated for their relative agronomic and breeding value; the performance of resistant hybrids was better than that of susceptible hybrids under sugarcane aphid infestation. The mechanisms of resistance were identified as antibiosis and antixenosis (non-preference). Some phenotypic traits also influenced aphid damage. Further investigation into the phenotypic, biochemical, and genotypic traits responsible for conditioning sugarcane aphid resistance is planned through heritability and quantitative trait locus (QTL) mapping studies. This will enable more efficient selection of genotypes that maintain grain and/or forage yield and quality when subjected to aphid infestation.