872 research outputs found

    Cereal grain and ear detection with convolutional neural networks

    High computing power and data availability have made it possible to combine traditional farming with modern machine learning methods. The profitability and environmental friendliness of agriculture can be improved through automatic data processing; for example, computer vision applications are enabling increasingly efficient automation of a growing range of tasks. Computer vision is a field of study that centers on how computers gain understanding from digital images. Object detection, a subfield of computer vision, focuses on mathematical techniques to detect, localize, and classify semantic objects in digital images. This thesis studies object detection methods based on convolutional neural networks and how they can be applied in precision agriculture to detect cereal grains and ears. Cultivation of pure oats poses particular challenges for farmers: the fields need to be inspected regularly to ensure that the crop is not contaminated by other cereals. If the quantity of gluten-containing foreign cereals exceeds a certain threshold per kilogram of weight, the crop cannot be used to produce gluten-free products. Detecting foreign grains and ears at the early stages of the growing season therefore helps ensure the quality of the gluten-free crop.
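    As a hedged illustration of what CNN-based object detection involves (not the thesis's own models or data), the sketch below runs a generic pretrained detector from torchvision on a stand-in image tensor; the weights, class labels, and input are placeholders.

```python
# Illustrative only: a generic pretrained Faster R-CNN (COCO classes) stands in
# for a detector fine-tuned on cereal grain/ear annotations.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)  # stand-in for a field image, values in [0, 1]
with torch.no_grad():
    predictions = model([image])[0]

# Each detection is a box (x1, y1, x2, y2), a class label, and a confidence score.
for box, label, score in zip(predictions["boxes"], predictions["labels"], predictions["scores"]):
    if score > 0.5:
        print(f"class {label.item()} at {box.tolist()} (score {score:.2f})")
```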

    Detection of Fusarium Head Blight in Wheat Grains Using Hyperspectral and RGB Imaging

    In modern agriculture, it is imperative to ensure that crops are healthy and safe for consumption. Fusarium Head Blight (FHB) can cause significant damage to wheat grains by reducing essential components such as moisture, protein, and starch, while also introducing dangerous toxins. Accurately distinguishing between healthy and FHB-infected wheat grains is therefore essential to guarantee stable and reliable wheat production while limiting financial losses and ensuring food safety. This thesis proposes effective methods to classify healthy and FHB-infected wheat grains using Hyperspectral Imaging (HSI) and Red Green Blue (RGB) images. The approach combines Principal Component Analysis (PCA) with morphological operations, together with dark and white reference correction, to create masks for the grains in each image. Classification was performed using a Partial Least Squares Discriminant Analysis (PLS-DA) model for the hyperspectral images and a Convolutional Neural Network (CNN) model for the RGB images. Both object-based and pixel-based approaches were compared for the PLS-DA model. The results indicated that the object-based approach outperformed the pixel-based approach as well as other well-known machine learning algorithms, including Random Forest (RF), linear Support Vector Machine (SVM), calibrated one-vs-all Stochastic Gradient Descent (SGD), and Decision Tree. The PLS-DA model using the object-based method yielded better results when tested on all wheat varieties, achieving an F1-score of 99.4%. Specific wavelengths were investigated based on a loading plot, and four effective wavelengths were identified (953 nm, 1373 nm, 1923 nm, and 2493 nm), with classification accuracy similar to that of the full spectral range. Moreover, the moisture and water content of the grains were analyzed from the hyperspectral images using an aquagram, which showed that healthy grains exhibited higher absorbance values than infected grains at all Water Matrix Coordinates (WAMACS). Furthermore, the CNN model was trained on cropped individual grains, and its classification accuracy was similar to that of the PLS-DA model, with an F1-score of 98.1%. These findings suggest that HSI is suitable for identifying FHB-infected wheat grains, while RGB images may provide a cost-effective alternative to hyperspectral images for this specific classification task. Further research should explore the potential of HSI for deeper investigation into how water absorption affects spectral measurements and moisture content in grains, as well as user-friendly interfaces for deep learning-based image classification.
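    As a rough sketch of two of the steps described above (not the thesis code), the example below shows standard dark/white reference correction and an object-based PLS-DA classifier built from scikit-learn's PLSRegression; the array shapes, helper names, and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def correct_reflectance(raw, dark, white):
    """Standard reflectance calibration: R = (raw - dark) / (white - dark)."""
    return (raw - dark) / (white - dark + 1e-9)

def object_spectra(cube, masks):
    """Object-based features: mean spectrum over each grain mask.
    cube: (H, W, bands); masks: list of boolean (H, W) arrays, one per grain."""
    return np.stack([cube[m].mean(axis=0) for m in masks])

# Synthetic stand-in data: 200 grains x 250 spectral bands, binary labels
# (0 = healthy, 1 = FHB-infected). Real features would come from object_spectra().
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 250))
y = rng.integers(0, 2, size=200)

# PLS-DA = PLS regression against the class label, thresholded at 0.5.
plsda = PLSRegression(n_components=10)
plsda.fit(X, y)
y_pred = (plsda.predict(X).ravel() > 0.5).astype(int)
print("training accuracy:", (y_pred == y).mean())
```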

    Implementation of Artificial Intelligence in Food Science, Food Quality, and Consumer Preference Assessment

    In recent years, new and emerging digital technologies applied to food science have been gaining attention and increased interest from researchers and the food and beverage industries, particularly digital technologies that can be used throughout the food value chain and are accurate, easy to implement, affordable, and user-friendly. Hence, this Special Issue (SI) is dedicated to novel sensor technologies and machine/deep learning modeling strategies for implementing artificial intelligence (AI) in food and beverage production and consumer assessment. The SI published quality papers from researchers in Australia, New Zealand, the United States, Spain, and Mexico, covering food and beverage products such as grapes and wine, chocolate, honey, whiskey, avocado pulp, and a variety of other foods.

    Big data analytics in high-throughput phenotyping

    Doctor of Philosophy, Department of Computer Science, Mitchell L. Neilsen. As the global population rises, advancements in plant diversity and crop yield are necessary for resource stability and nutritional security. In the next thirty years, the global population will pass 9 billion. Genetic advancements have become inexpensive and widely available to address this issue; however, phenotypic acquisition development has stagnated. Plant breeding programs have begun to support efforts in data mining, computer vision, and graphics to close the gap with genetic advancements. This dissertation creates a bridge between computer vision research and phenotyping by designing and analyzing various deep neural networks for concrete applications while presenting new and novel approaches. The significant contributions are advancements to the current state of the art in mobile high-throughput phenotyping (HTP), which promote more efficient plant science workflows. Novel tools and utilities created for automatic code generation, maintenance, and source translation are featured; these tools replace boilerplate segments and redundant tasks. Finally, this research investigates various state-of-the-art deep neural network architectures to derive methods for object identification and enumeration. Seed kernel counting is a crucial task in the plant research workflow. This dissertation explains techniques and tools for generating data to scale training. New dataset creation methodologies are introduced that aim to replace the classical approach to labeling data. Although HTP is a general topic, this research focuses on various grain and plant-seed phenotypes. Applying deep neural networks to seed kernels for classification and object detection is a relatively new topic. This research uses a novel open-source dataset that supports future architectures for detecting kernels. State-of-the-art pre-trained region-based convolutional neural networks (R-CNN) perform poorly on seeds. The proposed counting architectures outperform these models by focusing on learning a labeled integer count rather than anchor points for localization. Concurrently, models pre-trained on the seed dataset, a composition of geometrically primitive-like objects, boast improved evaluation metrics in comparison to the Common Objects in Context (COCO) dataset. A widely accepted problem in image processing is the segmentation of foreground objects from the background. This dissertation shows that state-of-the-art region-based convolutional neural networks (R-CNN) perform poorly in cases where foreground objects are similar to the background; instead, transfer learning leverages salient features and boosts performance on datasets with noisy backgrounds. The accumulation of new ideas and evidence of growth for mobile computer vision suggests a bright future for data acquisition in various fields of HTP. The results obtained provide a solid foundation for future research to stabilize and continue the growth of phenotypic acquisition and crop yield.
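    A minimal sketch of the counting idea described above (regressing a labeled integer count per image instead of predicting anchor points), assuming a PyTorch/torchvision setup; the backbone, loss, and tensor shapes are illustrative choices, not the dissertation's architecture.

```python
import torch
import torch.nn as nn
import torchvision

class SeedCounter(nn.Module):
    """CNN backbone with a scalar regression head that predicts a kernel count."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights="DEFAULT")
        backbone.fc = nn.Identity()      # keep the 512-d global feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, 1)    # single scalar: predicted count

    def forward(self, x):
        return self.head(self.backbone(x)).squeeze(-1)

model = SeedCounter()
images = torch.randn(4, 3, 224, 224)               # dummy batch of seed-tray images
counts = torch.tensor([12.0, 30.0, 7.0, 51.0])     # labeled integer counts

loss = nn.MSELoss()(model(images), counts)         # regress directly on the count
loss.backward()
print("loss:", loss.item())
```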

    Deep learning in food category recognition

    Integrating artificial intelligence with food category recognition has been a field of research interest for the past few decades. It is potentially one of the next steps in revolutionizing human interaction with food. The modern advent of big data and the development of data-oriented fields such as deep learning have advanced food category recognition, yet with increasing computational power and ever-larger food datasets, the approach's full potential has yet to be realized. This survey provides an overview of methods that can be applied to various food category recognition tasks, including detecting type, ingredients, quality, and quantity. We survey the core components for constructing a machine learning system for food category recognition, including datasets, data augmentation, hand-crafted feature extraction, and machine learning algorithms. We place a particular focus on the field of deep learning, including the utilization of convolutional neural networks, transfer learning, and semi-supervised learning. We provide an overview of relevant studies to promote further developments in food category recognition for research and industrial applications. Funding: MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); LIAS (P202ED10); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); Sino-UK Education Fund (OP202006); BBSRC (RM32G0178B8).
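    As a hedged illustration of the transfer-learning component surveyed above, the sketch below fine-tunes only a new classification head on top of a frozen pretrained CNN; the class count and dummy batch are placeholders rather than details taken from the survey.

```python
import torch
import torch.nn as nn
import torchvision

NUM_FOOD_CLASSES = 101                       # e.g. a Food-101-sized label set

model = torchvision.models.resnet50(weights="DEFAULT")
for param in model.parameters():             # freeze the pretrained features
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_FOOD_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for an augmented food-image data loader.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_FOOD_CLASSES, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```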

    Image-based deep learning approaches for plant phenotyping

    Get PDF
    Doctor of Philosophy, Department of Computer Science, Doina Caragea. The genetic potential of plant traits remains unexplored due to challenges in available phenotyping methods. Deep learning could be used to build automatic tools for identifying, localizing, and quantifying plant features based on agricultural images. This dissertation describes the development and evaluation of state-of-the-art deep learning approaches for several plant phenotyping tasks, including characterization of rice root anatomy based on microscopic root cross-section images, estimation of sorghum stomatal density and area based on microscopic images of leaf surfaces, and estimation of chalkiness in rice exposed to high night temperature based on images of rice grains. For the root anatomy task, anatomical traits such as root, stele, and late metaxylem were identified using a deep learning model based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) with a pre-trained VGG-16 backbone. The model was trained on root cross-section images in which the traits of interest were manually annotated as rectangular bounding boxes using the LabelImg tool. The traits were also predicted as rectangular bounding boxes, which were compared with the ground-truth bounding boxes using the intersection-over-union metric to evaluate detection accuracy. The predicted bounding boxes were subsequently used to estimate root and stele diameter, as well as late metaxylem count and average diameter. Experimental results showed that the trained models can accurately detect and quantify anatomical features and are robust to image variations. It was also observed that using the pre-trained VGG-16 network enabled the training of accurate models with a relatively small number of annotated images, making this approach very attractive for adaptation to new tasks. For estimating sorghum stomatal density and area, a deep learning approach for instance segmentation was used, specifically a Mask Region-based Convolutional Neural Network (Mask R-CNN), which produces pixel-level annotations of stomata objects. The pre-trained ResNet-101 network was used as the backbone of the model in combination with a feature pyramid network (FPN), which enables the model to identify objects at different scales. The Mask R-CNN model was trained on microscopic leaf surface images in which the stomata objects had been manually labeled at pixel level using the VGG Image Annotator tool. The predicted stomata masks were counted and subsequently used to estimate the stomatal area. Experimental results showed a strong correlation between the predicted counts/stomatal area and the corresponding manually produced values. Furthermore, as for the root anatomy task, this study showed that very accurate results can be obtained with a relatively small number of annotated images. Working on the root anatomy detection and stomatal segmentation tasks showed that manually annotating data, in terms of bounding boxes and especially pixel-level masks, can be a tedious and time-consuming job, even when a relatively small number of annotated images is used for training. To address this challenge, for the task of estimating chalkiness based on images of rice grains exposed to high night temperatures, a weakly supervised approach was used, specifically an approach based on Gradient-weighted Class Activation Mapping (Grad-CAM). Instead of performing pixel-level segmentation of the chalkiness in rice images, the weakly supervised approach makes use of high-level annotations of images as chalky or not-chalky. A convolutional neural network (e.g., ResNet-101) for binary classification is trained to distinguish between chalky and not-chalky images, and subsequently the gradients of the chalky class are used to determine a heatmap corresponding to the chalkiness area as well as a chalkiness score for each grain. Experimental results on both polished and unpolished rice grains, using standard instance classification and segmentation metrics, showed that Grad-CAM can accurately identify chalky grains and detect the chalkiness area. The results also showed that models trained on polished rice do not transfer to unpolished rice, suggesting that new models need to be trained or fine-tuned for other types of rice grains and possibly for images taken under different conditions. In conclusion, this dissertation first contributes to the field of deep learning by introducing new and challenging tasks that require adaptations of existing deep learning models. It also contributes to the field of agricultural image analysis and plant phenotyping by introducing fully automated high-throughput tools for identifying, localizing, and quantifying plant traits that are of significant importance to breeding programs. All the datasets and models trained in this dissertation have been made publicly available to enable the deep learning community to use them and further advance the state of the art on the challenging tasks addressed here. The resulting tools have also been made publicly available as web servers to enable the plant breeding community to use them on images collected for tasks similar to those addressed here. Future work will focus on adapting the models used in this dissertation to other similar tasks, and on developing similar models for other tasks relevant to the plant breeding community and to the agriculture community at large.
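    A minimal Grad-CAM sketch of the weakly supervised idea described above, assuming a PyTorch setup; here an ImageNet-pretrained ResNet-101 and its predicted class stand in for the fine-tuned chalky/not-chalky classifier and its chalky logit, and the input is a random tensor rather than a real grain image.

```python
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet101(weights="DEFAULT").eval()
activations, gradients = {}, {}

# Hooks capture the last convolutional block's activations and their gradients.
model.layer4.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

image = torch.randn(1, 3, 224, 224)          # stand-in for a rice grain image
logits = model(image)
logits[0, logits.argmax()].backward()        # gradient of the target ("chalky") logit

# Channel weights = global-average-pooled gradients; heatmap = ReLU of the
# weighted sum of activations, upsampled to the input size and normalized.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print("heatmap shape:", tuple(cam.shape))    # (1, 1, 224, 224)
```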

    Wheat Yield Assessment Using In-Field Organ-Scale Phenotyping and Deep Learning Methods
