
    Maize Tassel Detection From UAV Imagery Using Deep Learning

    The timing of flowering plays a critical role in determining the productivity of agricultural crops. If a crop flowers too early, it matures before the end of the growing season and loses the opportunity to capture and use large amounts of light energy. If it flowers too late, it may be killed by the change of seasons before it is ready to harvest. Maize flowering is one of the most important periods, during which even small amounts of stress can significantly alter yield. In this work, we developed and compared two methods for automatic tassel detection from imagery collected by an unmanned aerial vehicle, using deep learning models. The first approach was a customized framework for tassel detection based on a convolutional neural network (TD-CNN). The other method was a state-of-the-art object detection technique, the faster region-based CNN (Faster R-CNN), serving as a baseline for detection accuracy. The evaluation criteria for tassel detection were customized to correctly reflect the needs of tassel detection in an agricultural setting. Although detecting thin tassels in aerial imagery is challenging, our results showed promising accuracy: the TD-CNN achieved an F1 score of 95.9% and the Faster R-CNN an F1 score of 97.9%. More CNN-based model structures can be investigated in the future for improved accuracy, speed, and generalizability in aerial tassel detection.
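The F1 scores reported above combine precision and recall over matched detections. As a minimal sketch (the TP/FP/FN counts below are illustrative, not taken from the paper), the metric can be computed like this:

```python
def precision_recall_f1(tp, fp, fn):
    """Detection metrics from true-positive, false-positive, and
    false-negative counts (a detection counts as TP when it is matched
    to a ground-truth tassel under the chosen matching criterion)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative counts only: 979 matched tassels, 14 spurious boxes, 28 missed.
p, r, f1 = precision_recall_f1(979, 14, 28)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")  # F1 = 1958/2000 = 0.979
```

Note that F1 simplifies to 2·TP / (2·TP + FP + FN), so with these counts it is exactly 1958/2000 = 0.979.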

    A multiscale point-supervised network for counting maize tassels in the wild

    Accurate counting of maize tassels is essential for monitoring crop growth and estimating crop yield. Recently, deep-learning-based object detection methods have been used for this purpose, where plant counts are estimated from the number of bounding boxes detected. However, these methods suffer from 2 issues: (a) the scales of maize tassels vary because of image capture from varying distances and crop growth stages; and (b) tassel areas tend to be affected by occlusions or complex backgrounds, making detection ineffective. In this paper, we propose a multiscale lite attention enhancement network (MLAENet) that uses only point-level annotations (i.e., objects labeled with points) to count maize tassels in the wild. Specifically, the proposed method includes a new multicolumn lite feature extraction module that generates a scale-dependent density map by exploiting multiple dilated convolutions with different rates, capturing rich contextual information at different scales more effectively. In addition, a multifeature enhancement module that integrates an attention strategy is proposed to enable the model to distinguish between tassel areas and their complex backgrounds. Finally, a new up-sampling module, UP-Block, is designed to improve the quality of the estimated density map by automatically suppressing the gridding effect during the up-sampling process. Extensive experiments on 2 publicly available tassel-counting datasets, Maize Tassels Counting and Maize Tassels Counting from Unmanned Aerial Vehicle, demonstrate that the proposed MLAENet achieves marked advantages in counting accuracy and inference speed compared to state-of-the-art methods. The model is publicly available at https://github.com/ShiratsuyuShigure/MLAENet-pytorch/tree/main
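Point-supervised counting networks like the one above are typically trained against ground-truth density maps built by placing a normalized Gaussian at each annotated point, so that the map's sum equals the object count. A minimal dependency-free sketch of that target construction (the kernel radius and sigma are illustrative choices, not values from the paper):

```python
import math

def gaussian_density_map(points, height, width, sigma=2.0, radius=6):
    """Build a density map from point annotations: each point contributes
    a locally normalized Gaussian, so sum(map) == number of points."""
    dmap = [[0.0] * width for _ in range(height)]
    for (py, px) in points:
        kernel, total = {}, 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = py + dy, px + dx
                if 0 <= y < height and 0 <= x < width:
                    w = math.exp(-(dy * dy + dx * dx) / (2 * sigma * sigma))
                    kernel[(y, x)] = w
                    total += w
        # Normalize per point so truncation at image borders does not
        # change the count encoded in the map.
        for (y, x), w in kernel.items():
            dmap[y][x] += w / total
    return dmap

tassel_points = [(10, 10), (20, 30), (20, 32)]  # (row, col) annotations
dm = gaussian_density_map(tassel_points, 40, 40)
estimated_count = sum(sum(row) for row in dm)  # ≈ 3.0
```

The network then regresses such maps from images; at inference, integrating the predicted map yields the count, which is why point labels are much cheaper than bounding boxes for this task.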

    In Vivo Human-Like Robotic Phenotyping of Leaf and Stem Traits in Maize and Sorghum in Greenhouse

    In plant phenotyping, the measurement of morphological, physiological, and chemical traits of leaves and stems is needed to investigate and monitor the condition of plants. The manual measurement of these properties is time consuming, tedious, error prone, and laborious. The use of robots is a new approach to accomplish such endeavors, enabling automatic monitoring with minimal human intervention. In this study, two plant phenotyping robotic systems were developed to realize automated measurement of plant leaf properties and stem diameter, which could reduce the tediousness of data collection compared to manual measurements. The robotic systems comprised a four-degree-of-freedom (DOF) robotic manipulator and a Time-of-Flight (TOF) camera. Robotic grippers were developed to integrate an optical fiber cable (coupled to a portable spectrometer) for leaf spectral reflectance measurement, a thermistor for leaf temperature measurement, and a linear potentiometer for stem diameter measurement. An image processing technique and a deep learning method were used to identify grasping points on leaves and stems, respectively. The systems were tested in a greenhouse using maize and sorghum plants. The results from the leaf phenotyping robot experiment showed that leaf temperature measurements by the phenotyping robot were correlated with those measured manually by a human researcher (R2 = 0.58 for maize and 0.63 for sorghum). The leaf spectral measurements by the phenotyping robot predicted leaf chlorophyll, water content, and potassium with moderate success (R2 ranged from 0.52 to 0.61), whereas the predictions for leaf nitrogen and phosphorus were poor. The total execution time to grasp and take measurements from one leaf was 35.5±4.4 s for maize and 38.5±5.7 s for sorghum. Furthermore, the test showed that the grasping success rate was 78% for maize and 48% for sorghum. 
The experimental results from the stem phenotyping robot demonstrated a high correlation between the manual and automated stem diameter measurements (R2 > 0.98). The execution time for stem diameter measurement was 45.3 s. The system successfully detected, localized, and grasped the stem for all plants during the experiment. Both robots could decrease the tediousness of collecting phenotypes compared to manual measurements. The phenotyping robots can be useful to complement traditional image-based high-throughput plant phenotyping in greenhouses by collecting in vivo morphological, physiological, and biochemical trait measurements for plant leaves and stems. Advisors: Yufeng Ge, Santosh Pitl
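The R2 values quoted above are coefficients of determination between robot and manual measurements. As a minimal sketch of that validation step (the measurement values below are invented for illustration):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot, where y_true
    is the reference (manual) measurement and y_pred the robot's."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative stem diameters (mm): manual caliper vs. robot potentiometer.
manual = [18.2, 21.5, 19.8, 23.1, 20.4]
robot = [18.0, 21.7, 19.9, 22.8, 20.6]
print(f"R2 = {r_squared(manual, robot):.3f}")
```

An R2 near 1 (as in the stem experiment, R2 > 0.98) means the robot's readings explain nearly all the variance in the manual reference.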

    Cereal grain and ear detection with convolutional neural networks

    High computing power and data availability have made it possible to combine traditional farming with modern machine learning methods. The profitability and environmental friendliness of agriculture can be improved through automatic data processing. For example, applications related to computer vision are enabling the automation of various tasks ever more efficiently. Computer vision is a field of study which centers on how computers gain understanding from digital images. A subfield of computer vision, called object detection, focuses on mathematical techniques to detect, localize, and classify semantic objects in digital images. This thesis studies object detection methods that are based on convolutional neural networks and how they can be applied in precision agriculture to detect cereal grains and ears. Cultivation of pure oats poses particular challenges for farmers. The fields need to be inspected regularly to ensure that the crop is not contaminated by other cereals. If the quantity of foreign cereals containing gluten exceeds a certain threshold per kilogram of weight, that crop cannot be used to produce gluten-free products. Detecting foreign grains and ears at the early stages of the growing season ensures the quality of the gluten-free crop.

    Monitoring tar spot disease in corn at different canopy and temporal levels using aerial multispectral imaging and machine learning

    Introduction: Tar spot is a high-profile disease, causing various degrees of yield losses on corn (Zea mays L.) in several countries throughout the Americas. Disease symptoms usually appear at the lower canopy in corn fields with a history of tar spot infection, making it difficult to monitor the disease with unmanned aircraft systems (UAS) because of occlusion.
    Methods: UAS-based multispectral imaging and machine learning were used to monitor tar spot at different canopy and temporal levels and extract epidemiological parameters from multiple treatments. Disease severity was assessed visually at three canopy levels within micro-plots, while aerial images were gathered by UASs equipped with multispectral cameras. Both disease severity and multispectral images were collected from five to eleven time points each year for two years. Image-based features, such as single-band reflectance, vegetation indices (VIs), and their statistics, were extracted from ortho-mosaic images and used as inputs for machine learning to develop disease quantification models.
    Results and discussion: The developed models showed encouraging performance in estimating disease severity at different canopy levels in both years (coefficient of determination up to 0.93 and Lin’s concordance correlation coefficient up to 0.97). Epidemiological parameters, including initial disease severity (y0) and area under the disease progress curve, were modeled using data derived from multispectral imaging. In addition, results illustrated that digital phenotyping technologies could be used to monitor the onset of tar spot when disease severity is relatively low (< 1%) and evaluate the efficacy of disease management tactics under micro-plot conditions. Further studies are required to apply and validate our methods in large corn fields.
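Two of the quantities above have simple standard definitions: a vegetation index such as NDVI is a band ratio computed per pixel from the multispectral reflectance, and the area under the disease progress curve (AUDPC) is conventionally computed from repeated severity assessments with the trapezoid rule. A minimal sketch of both (the specific index and the severity values are illustrative; the paper does not specify which VIs it used):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and
    red reflectance (values in [0, 1])."""
    return (nir - red) / (nir + red)

def audpc(days, severities):
    """Area under the disease progress curve via the trapezoid rule.
    days: assessment dates (e.g., days after planting); severities:
    disease severity at each date (e.g., percent leaf area affected)."""
    return sum((severities[i] + severities[i + 1]) / 2 * (days[i + 1] - days[i])
               for i in range(len(days) - 1))

# Illustrative progress curve: severity climbing over three assessments.
print(ndvi(0.6, 0.2))          # healthy canopy -> relatively high NDVI
print(audpc([0, 5, 10], [0.0, 1.0, 2.0]))
```

Summarizing a whole season into a single AUDPC value is what lets the imaging-derived severities be compared across treatments as an epidemiological parameter.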

    SpikeletFCN: Counting Spikelets from Infield Wheat Crop Images Using Fully Convolutional Networks

    Currently, crop management through automatic monitoring is gaining momentum, but presents various challenges. One key challenge is to quantify yield traits from images captured automatically. Wheat is one of the three major crops in the world, with a total demand expected to exceed 850 million tons by 2050. In this paper we attempt to estimate wheat spikelet counts from high-definition RGB infield images using a fully convolutional model. We also propose the use of transfer learning and segmentation to improve the model. We report cross-validated Mean Absolute Error (MAE) and Mean Square Error (MSE) of 53.0 and 71.2, respectively, on 15 real field images. We produce visualisations which show the good fit of our model to the task. We also conclude that both transfer learning and segmentation have a strongly positive impact on CNN-based models, reducing error by up to 89% when extracting key traits such as wheat spikelet counts.
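The MAE and MSE figures above are the standard counting-error metrics: the average absolute and average squared deviation between predicted and ground-truth counts per image. A minimal sketch (the count values below are invented for illustration, not the paper's data):

```python
def mae(y_true, y_pred):
    """Mean Absolute Error between ground-truth and predicted counts."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean Square Error; penalizes large count errors more heavily."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative spikelet counts for a handful of images.
true_counts = [410, 385, 402, 371]
pred_counts = [398, 390, 415, 360]
print(mae(true_counts, pred_counts), mse(true_counts, pred_counts))
```

Because MSE squares each per-image error, a model whose MSE is much larger than its MAE (as here, 71.2 vs. 53.0) is making a few comparatively large mistakes rather than uniformly small ones.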

    Global Wheat Head Detection 2021: an improved dataset for benchmarking wheat head detection methods

    The Global Wheat Head Detection (GWHD) dataset was created in 2020 and assembled 193,634 labelled wheat heads from 4,700 RGB images acquired from various acquisition platforms and 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been reexamined, relabeled, and complemented by adding 1,722 images from 5 additional countries, contributing 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than the GWHD_2020 version.