14 research outputs found

    An Exploration of Deep-Learning Based Phenotypic Analysis to Detect Spike Regions in Field Conditions for UK Bread Wheat

    Wheat is one of the major crops in the world, with global demand expected to reach 850 million tons by 2050, clearly outpacing current supply. The continual pressure to sustain wheat yield for the world’s growing population under fluctuating climate conditions requires breeders to increase yield and yield stability across environments. We are working to integrate deep learning into field-based phenotypic analysis to assist breeders in this endeavour. We have utilised wheat images collected by distributed CropQuant phenotyping workstations deployed for multiyear field experiments on UK bread wheat varieties. Based on these image series, we have developed a deep-learning-based analysis pipeline to segment spike regions from complicated backgrounds. As a first step towards robust measurement of key yield traits in the field, we present a promising approach that employs a Fully Convolutional Network (FCN) to perform semantic segmentation of images and segment wheat spike regions. We also demonstrate the benefits of transfer learning through the use of parameters obtained from other image datasets. We found that the FCN architecture achieved a Mean classification Accuracy (MA) >82% on validation data and >76% on test data, and a Mean Intersection over Union (MIoU) >73% on validation data and >64% on test data. Through this phenomics research, we believe our approach forms a sound foundation for extracting key yield-related traits such as spikes per unit area and spikelet number per spike, which can be used to support yield-focused wheat breeding objectives in the near future.
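The Mean Intersection over Union (MIoU) figure reported above can be computed per class and averaged. A minimal sketch with a toy two-class mask (background vs. spike region; the masks are illustrative, not the paper's data):

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean Intersection over Union, averaged over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 segmentation masks: 0 = background, 1 = spike region
truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(round(mean_iou(pred, truth, 2), 3))  # -> 0.858
```

Here the spike class scores IoU 4/5 and the background 11/12, so the mean sits between the two per-class values.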

    Automatic Identification and Monitoring of Plant Diseases Using Unmanned Aerial Vehicles: A Review

    Disease diagnosis is one of the major tasks for increasing food production in agriculture. Although precision agriculture (PA) takes less time and provides a more precise application of agricultural activities, the detection of disease using an Unmanned Aerial System (UAS) is a challenging task. Several Unmanned Aerial Vehicles (UAVs) and sensors have been used for this purpose. UAV platforms and their peripherals have their own limitations in accurately diagnosing plant diseases. Several types of image processing software are available for vignetting correction and orthorectification. The training and validation of datasets are important aspects of data analysis. Currently, different algorithms and architectures of machine learning models are used to classify and detect plant diseases. These models help in image segmentation and feature extraction to interpret results. Researchers also use the values of vegetative indices, such as the Normalized Difference Vegetative Index (NDVI) and the Crop Water Stress Index (CWSI), acquired from different multispectral and hyperspectral sensors and fitted to statistical models to deliver results. There are still various gaps in the automatic detection of plant diseases, as imaging sensors are limited by their spectral bandwidth, resolution, background noise of the image, etc. The future of crop health monitoring using UAVs should include a gimbal carrying multiple sensors, large datasets for training and validation, the development of site-specific irradiance systems, and so on. This review briefly highlights the advantages of automatic detection of plant diseases for growers.
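NDVI, one of the vegetative indices mentioned above, is the normalized difference of near-infrared and red reflectance. A minimal per-pixel sketch with made-up reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index per pixel, in [-1, 1].

    Healthy vegetation reflects strongly in NIR and absorbs red light,
    so high NDVI indicates dense, healthy canopy.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against zero division

# Toy 2x2 reflectance bands (values are illustrative)
nir = np.array([[0.60, 0.50], [0.40, 0.20]])
red = np.array([[0.10, 0.10], [0.20, 0.20]])
print(np.round(ndvi(nir, red), 2))
```

The same pattern applies to CWSI or any other band-ratio index; only the input bands and the formula change.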

    Fruit sizing using AI: A review of methods and challenges

    Fruit size at harvest is an economically important variable for high-quality table fruit production in orchards and vineyards. In addition, knowing the number and size of the fruit on the tree is essential in the framework of precise production, harvest, and postharvest management. A prerequisite for analysis of fruit in a real-world environment is detection and segmentation from background signal. In the last five years, deep learning convolutional neural networks have become the standard method for automatic fruit detection, achieving F1-scores higher than 90%, as well as real-time processing speeds. At the same time, different methods have been developed, mainly for fruit size and, more rarely, fruit maturity estimation from 2D images and 3D point clouds. These sizing methods are focused on a few species such as grape, apple, citrus, and mango, resulting in mean absolute error values of less than 4 mm for apple fruit. This review provides an overview of the most recent methodologies developed for in-field fruit detection/counting and sizing, as well as a few upcoming examples of maturity estimation. Challenges, such as sensor fusion, highly varying lighting conditions, occlusions in the canopy, shortage of public fruit datasets, and opportunities for research transfer, are discussed. This work was partly funded by the Department of Research and Universities of the Generalitat de Catalunya (grants 2017 SGR 646 and 2021 LLAV 00088) and by the Spanish Ministry of Science and Innovation / AEI/10.13039/501100011033 / FEDER (grants RTI2018-094222-B-I00 [PAgFRUIT project] and PID2021-126648OB-I00 [PAgPROTECT project]). The Secretariat of Universities and Research of the Department of Business and Knowledge of the Generalitat de Catalunya and the European Social Fund (ESF) are also thanked for financing Juan Carlos Miranda’s pre-doctoral fellowship (2020 FI_B 00586). The work of Jordi Gené-Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU.
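One common way to turn a detected fruit's pixel extent into a metric size is to back-project through the pinhole camera model, given a depth estimate (e.g. from a depth camera or a 3D point cloud). This is a generic sketch of that geometry, not a method from any specific paper in the review; all numbers are hypothetical:

```python
def fruit_diameter_mm(pixel_diameter, depth_mm, focal_length_px):
    """Convert an image-plane diameter (pixels) to a metric diameter (mm).

    Pinhole model: size_world = size_image * depth / focal_length.
    Assumes a roughly spherical fruit and a known camera-to-fruit distance.
    """
    return pixel_diameter * depth_mm / focal_length_px

# Hypothetical case: an apple spanning 120 px, 1.0 m from a camera
# whose focal length is 1500 px
print(fruit_diameter_mm(120, 1000.0, 1500.0))  # -> 80.0 (mm)
```

The accuracy of this conversion depends directly on the quality of the depth estimate, which is one reason the review highlights sensor fusion as a challenge.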

    Deep learning sensor fusion in plant water stress assessment: A comprehensive review

    Water stress is one of the major challenges to food security, causing significant economic losses for nations as well as for growers. Accurate assessment of water stress will enhance agricultural productivity through optimization of plant water usage, maximization of plant breeding strategies, and prevention of forest wildfires for better ecosystem management. Recent advancements in sensor technologies have enabled high-throughput, non-contact, and cost-efficient plant water stress assessment through intelligent system modeling. The advanced deep learning sensor fusion technique has been reported to improve the performance of machine learning applications in processing the collected sensory data. This paper extensively reviews the state-of-the-art methods for plant water stress assessment that utilize the deep learning sensor fusion approach, together with future prospects and challenges of the application domain. Notably, 37 deep learning solutions fell under six main areas, namely soil moisture estimation, soil water modelling, evapotranspiration estimation, evapotranspiration forecasting, plant water status estimation and plant water stress identification. In addition, eight deep learning solutions address the challenges of three-dimensional data and plant variety, including data imbalance arising from isohydric plants and the effect of variation within the same species cultivated in different locations.
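The simplest form of sensor fusion discussed in such reviews is feature-level (early) fusion: per-sensor feature vectors are concatenated into a single input for a downstream model. A minimal sketch with hypothetical soil-moisture and thermal features (the modalities and values are illustrative only):

```python
import numpy as np

def fuse_features(*feature_vectors):
    """Feature-level (early) fusion: concatenate per-sensor feature vectors
    into one input vector for a downstream estimator."""
    return np.concatenate([np.asarray(v, dtype=float) for v in feature_vectors])

# Hypothetical per-sample features from two modalities
soil_moisture_feats = [0.31, 0.28, 0.30]  # e.g. probe readings at three depths
thermal_feats = [24.6, 1.8]               # e.g. canopy temperature mean and std

x = fuse_features(soil_moisture_feats, thermal_feats)
print(x.shape)  # -> (5,)
```

Deep architectures instead learn per-modality encoders and fuse intermediate representations, but the concatenation step itself looks the same.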

    Artificial Neural Networks in Agriculture

    Modern agriculture needs to combine high production efficiency with high quality of the obtained products. This applies to both crop and livestock production. To meet these requirements, advanced methods of data analysis are increasingly used, including those derived from artificial intelligence. Artificial neural networks (ANNs) are one of the most popular tools of this kind. They are widely used in solving various classification and prediction tasks, and for some time now also in the broadly defined field of agriculture. They can form part of precision farming and decision support systems. Artificial neural networks can replace classical methods of modelling many issues, and are one of the main alternatives to classical mathematical models. The spectrum of applications of artificial neural networks is very wide. For a long time now, researchers from all over the world have been using these tools to support agricultural production, making it more efficient and providing the highest-quality products possible.

    Classification and severity prediction of maize leaf diseases using Deep Learning CNN approaches

    Maize (Zea mays) is the staple food of Southern Africa and most African regions. This staple food has been threatened by many diseases that reduce its yield and endanger its existence. Within this domain, it is important for researchers to develop technologies that will safeguard average yield by classifying or predicting such diseases at an early stage. Predicting, and to some degree classifying, such diseases in the Southern African staple food (maize) will reduce hunger and increase affordability among families. This study focuses mainly on three diseases: Common Rust (CR), Grey Leaf Spot (GLS) and Northern Corn Leaf Blight (NCLB). With increasing drought conditions prevailing across Southern Africa, and by extension across Africa, it is vital that mitigation measures are put in place to prevent additional loss of crop yield through disease. This study introduces the development of Deep Learning (DL) Convolutional Neural Networks (CNNs) (in this thesis, "deep learning" and "convolutional neural network", alone or in combination, are used interchangeably) to classify the disease types and predict the severity of such diseases. The study focuses primarily on CNNs, which are among the tools that can be used for classifying images of various maize leaf diseases and for predicting the severity of Common Rust (CR) and Northern Corn Leaf Blight (NCLB). In essence, the objectives of this study are: i. to create and test a CNN model that can classify various types of maize leaf diseases; ii. to set up and test a CNN model that can predict the severities of the maize leaf disease known as CR, using a hybrid model in which fuzzy logic rules are combined with a CNN; iii. to build and test a CNN model that can predict the severities of the maize leaf disease known as NCLB by analysing lesion colour and sporulation patterns. This study follows a quantitative approach of designing and developing CNN algorithms that classify and predict the severities of maize leaf diseases. In Chapter 3, the CNN model for classifying various types of maize leaf diseases was set up on a Java Neuroph GUI (graphical user interface) framework. The CNN in this chapter achieved an average validation accuracy of 92.85% and accuracies of 87% to 99.9% on separate class tests. In Chapter 4, the CNN model for the prediction of CR severities was based on fuzzy rules and thresholding methods. It achieved a validation accuracy of 95.63% and an accuracy of 89% when tested on separate images of CR to make severity predictions among four classes of CR at various stages of the disease's severity. Finally, in Chapter 5, the CNN set up to predict the severities of NCLB achieved 100% validation accuracy in classifying the two NCLB severity stages. The model also passed a robustness test of its ability to classify the two NCLB stages, as both stages were trained on images that had cigar-shaped lesions. The three objectives of this study are met in three separate chapters based on published journal papers. Finally, the research objectives were evaluated against the results obtained in these three chapters to summarize the key research contributions of this work. College of Engineering, Science and Technology. Ph.D. (Science, Engineering and Technology)
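Severity prediction by thresholding, as used alongside the fuzzy rules above, can be reduced to mapping a diseased-leaf-area fraction onto ordinal classes. A minimal sketch; the cut-off values and class names are illustrative, not those of the thesis:

```python
def severity_class(lesion_fraction):
    """Map the fraction of leaf area covered by lesions to a severity class.

    Thresholds are hypothetical; a fuzzy-rule system would replace these
    hard cut-offs with overlapping membership functions.
    """
    if not 0.0 <= lesion_fraction <= 1.0:
        raise ValueError("lesion_fraction must be in [0, 1]")
    if lesion_fraction < 0.05:
        return "healthy"
    elif lesion_fraction < 0.20:
        return "early"
    elif lesion_fraction < 0.50:
        return "moderate"
    else:
        return "severe"

print(severity_class(0.12))  # -> early
```

In the hybrid setup described above, a CNN would first segment or score the lesions, and a rule layer like this would then assign the severity stage.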

    Assessing berry number for grapevine yield estimation by image analysis: case study with the white variety “Encruzado”

    Mestrado em Engenharia de Viticultura e Enologia (Double degree) / Instituto Superior de Agronomia. Universidade de Lisboa / Faculdade de Ciências. Universidade do Porto. Nowadays, yield estimation represents one of the most important topics in viticulture. It can lead to better vineyard management and better organization of harvesting operations in the vineyard and in the cellar. In recent years, image analysis has become an important tool to improve yield forecasting, with the advantages of saving time and being non-invasive. This research aims to estimate the yield of the white cultivar ‘Encruzado’ using the visible berry number counted in images acquired at veraison and near harvest, using a manual RGB camera and the robot VINBOT. Images were collected in the laboratory and in the field at the experimental vineyard of the Instituto Superior de Agronomia (ISA) in Lisbon. In the field images, the number of visible berries per canopy meter was higher at maturation than at veraison, respectively 72.6 and 66.3. Regarding the percentage of visible berries, 30.2% were visible at veraison and 24.1% at maturation. Concerning the percentage of berries occluded by other berries, 28.7% was observed at veraison and 24.3% at maturation. Regression analysis showed that the number of berries in the image explained a high proportion of bunch weight variability: R2 = 0.64 at veraison and 0.91 at maturation. Regression analysis also showed that canopy porosity explained a very high proportion of visible berry variability: R2 = 0.81 at veraison and 0.88 at maturation. The obtained regression models underestimated the yield, with a higher error at veraison than at maturation. This underestimation indicates that the use of visible berry number in images to estimate yield still needs further research to improve the algorithms’ accuracy.
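The regression-and-R2 workflow behind abstracts like this one is straightforward: fit a least-squares line from the image-derived count to the measured yield and report the coefficient of determination. A sketch with invented per-vine numbers (not the study's data):

```python
import numpy as np

# Hypothetical per-vine data: visible berries counted in the image vs. actual yield (kg)
visible_berries = np.array([40, 55, 62, 71, 80, 95, 103, 120])
yield_kg = np.array([0.9, 1.2, 1.3, 1.6, 1.7, 2.1, 2.2, 2.6])

# Ordinary least squares fit: yield = a * berries + b
a, b = np.polyfit(visible_berries, yield_kg, 1)
predicted = a * visible_berries + b

# Coefficient of determination (R^2), the statistic quoted in the abstract
ss_res = np.sum((yield_kg - predicted) ** 2)
ss_tot = np.sum((yield_kg - yield_kg.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 2))
```

A systematic underestimate like the one reported would show up as a consistently negative bias in `yield_kg - predicted` on held-out vines, rather than in the R2 itself.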

    Development of a new non-invasive vineyard yield estimation method based on image analysis

    Doutoramento em Engenharia Agronómica / Instituto Superior de Agronomia. Universidade de Lisboa. Predicting vineyard yield with accuracy can provide several advantages to the whole vine and wine industry. Today this is mostly done using manual, and sometimes destructive, methods based on bunch samples. Yield estimation using computer vision and image analysis can potentially perform this task extensively, automatically, and non-invasively. In the present work this approach is explored in three main steps: image collection, occluded-fruit estimation, and conversion of image traits to mass. In the first step, grapevine images were collected in field conditions along some of the main grapevine phenological stages. Visible yield components were identified in the images and compared to ground truth. When analyzing inflorescences and bunches, more than 50% were occluded by leaves or other plant organs across three cultivars. No significant differences were observed in bunch visibility after fruit set. Visible bunch projected area explained an average of 49% of vine yield variation between veraison and harvest. In the second step, vine images were collected in field conditions with different levels of defoliation intensity at the bunch zone. A regression model combining canopy porosity and visible bunch area, obtained via image analysis, explained 70-84% of bunch exposure variation. This approach allowed for an estimation of the occluded fraction of bunches with average errors below |10|%. No significant differences were found between the model’s output at veraison and harvest. In the last step, the conversion of bunch image traits into mass was explored in laboratory and field conditions. In both cases, cultivar differences related to bunch architecture were found to affect weight estimation.
    A combination of derived variables, including visible bunch area, estimated total bunch area, visible bunch perimeter, visible berry number and bunch compactness, was used to estimate yield on undisturbed grapevines. The final model achieved an R2 = 0.86 between actual and estimated yield (n = 213). If performed automatically, the final approach suggested in this work has the potential to provide a non-invasive method that can be applied accurately across whole vineyards.

    Development of in-field data acquisition systems and machine learning-based data processing and analysis approaches for turfgrass quality rating and peanut flower detection

    Digital image processing and machine vision techniques provide scientists with an objective measure of crop quality that adds to the validity of study results without burdening the evaluation process. This dissertation aimed to develop in-field data acquisition systems and supervised machine-learning-based data processing and analysis approaches for turfgrass quality classification and peanut flower detection. The 3D Scanner App for the Apple iPhone 12 Pro's camera with a LiDAR sensor provided high-resolution rendered turfgrass images. The battery lasted for the entire data acquisition over an experimental field (49 m × 15 m) containing 252 warm-season turfgrass plots. Using the smartphone as the image acquisition tool achieved outcomes similar to the traditional image acquisition methods described in other studies. Experiments were carried out on turfgrass quality classification grouped into two classes (“Poor”, “Acceptable”) and four classes (“Very poor,” “Poor,” “Acceptable,” “High”) using supervised machine learning techniques. The Gray-Level Co-occurrence Matrix (GLCM) feature extractor with a Random Forest classifier achieved the highest accuracy rate (81%) on the testing dataset for two classes. For four classes, the Gabor filter was the best feature extractor and performed best with Support Vector Machine (SVM) and XGBoost classifiers, achieving 82% accuracy. The presented method will further assist researchers in developing a smartphone application for turfgrass quality rating. The study also applied deep-learning-based features to feed machine learning classifiers. The ResNet-101 deep feature extractor with an SVM classifier achieved an accuracy rate of 91% for two classes. The ResNet-152 deep feature extractor with the SVM classifier achieved an 86% accuracy rate for four classes.
    YOLOX-L and YOLOX-X models were compared with different data augmentation configurations to find the best YOLOX object detector for peanut flower detection. Peanut flowers were detected in images collected from a research field. YOLOX-X with weak data augmentation achieved the highest mean average precision at an Intersection over Union threshold of 50%. The presented method will further assist researchers in developing a method for counting flowers in images. The presented detection technique, with minor modifications, can be implemented for other crops or flowers.
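The GLCM features mentioned above tabulate how often pairs of gray levels co-occur at a fixed pixel offset; scalar texture descriptors such as contrast and energy are then derived from the normalized matrix and fed to classifiers like Random Forest. A minimal sketch on a toy 4-level image (illustrative, not the dissertation's code):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for a single offset (dx, dy),
    normalized to a joint probability table."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    return m / m.sum()

# Toy quantized image patch with 4 gray levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img)

# Two classic Haralick-style descriptors used as classifier features
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
energy = np.sum(p ** 2)
print(round(contrast, 3), round(energy, 3))  # -> 0.333 0.167
```

In practice a library implementation (e.g. scikit-image's `graycomatrix`) would be used over several offsets and angles, and the resulting feature vectors passed to the Random Forest or SVM classifiers described in the abstract.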