5 research outputs found

    Deep Learning-Based Method for Accurate Real-Time Seed Detection in Glass Bottle Manufacturing

    Glass bottle-manufacturing companies produce bottles of different colors, shapes and sizes. One identified problem is that seeds (small bubble defects) appear in the bottle, mainly due to the temperature and parameters of the oven. This paper presents a new system capable of detecting seeds of 0.1 mm² in size in glass bottles as they are being manufactured, 24 hours a day, 7 days a week. The bottles move along the conveyor belt at 50 m/min, at a production rate of 250 bottles/min. The proposed method combines deep learning-based artificial intelligence techniques with classical image processing on images acquired with a high-speed line camera. The algorithm comprises three stages. First, the bottle is identified in the input image. Next, an algorithm based on thresholding and morphological operations is applied to the bottle region to locate potential seed candidates. Finally, a deep learning-based model classifies whether the proposed candidates are real seeds or not. This method filters out most of the false positives caused by stains on the glass surface while losing no real seeds, achieving an F1 score of 0.97. It demonstrates the advantages of deep learning techniques for problems where classical image processing algorithms are not sufficient.

    This work was partially supported by the OPENZDM project, a project from the European Union's Horizon Europe research and innovation programme under Grant Agreement No. 101058673 in the call HORIZON-CL4-2021-TWIN-TRANSITION-0
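    The abstract does not give the paper's exact implementation, but the candidate-generation stage maps naturally onto standard image-processing primitives. Below is a minimal Python/OpenCV sketch of the first two stages, assuming Otsu thresholding and a 3×3 morphological opening; all function names and parameters are illustrative assumptions, and the third stage would pass each candidate crop to a trained CNN classifier.

```python
# Hypothetical sketch of the first two stages of the seed-detection pipeline.
# Names (locate_bottle, seed_candidates) and thresholds are assumptions.
import cv2
import numpy as np

def locate_bottle(image: np.ndarray) -> np.ndarray:
    """Stage 1 (assumed): isolate the bottle as the largest bright region."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return image[y:y + h, x:x + w]

def seed_candidates(bottle: np.ndarray, min_area: int = 3) -> list:
    """Stage 2: thresholding plus morphological opening to propose small dark blobs."""
    gray = cv2.cvtColor(bottle, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Stage 3 (not shown): a CNN classifies each candidate crop as seed / not-seed.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```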

    Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets

    Weeds compete with productive crops for soil, nutrients and sunlight and are therefore a major contributor to crop yield loss, which is why safer and more effective herbicide products are continually being developed. Digital evaluation tools that automate and homogenize field measurements are of vital importance to accelerate their development. However, building these tools requires semantic segmentation datasets, whose generation is a complex, time-consuming and costly task. In this paper, we present a deep learning segmentation model that is able to distinguish between different plant species at the pixel level. First, we generated three extensive datasets targeting one crop species (Zea mays), three grass species (Setaria verticillata, Digitaria sanguinalis, Echinochloa crus-galli) and three broadleaf species (Abutilon theophrasti, Chenopodium album, Amaranthus retroflexus). The first dataset consists of real field images that were manually annotated. The second dataset is composed of images of plots where only one species is present at a time, and the third dataset was synthetically generated from images of individual plants, mimicking the distribution of real field images. Second, we propose a semantic segmentation architecture that extends PSPNet with an auxiliary classification loss to aid model convergence. Our results show that network performance increases when the real field image dataset is supplemented with the other types of datasets, without increasing the manual annotation effort. More specifically, using the real field dataset alone obtains a Dice-Sørensen Coefficient (DSC) score of 25.32. This performance increases when the dataset is combined with the single-species dataset (DSC=47.97) or the synthetic dataset (DSC=45.20). As for the proposed model, an ablation study shows that removing the proposed auxiliary classification loss decreases segmentation performance (DSC=45.96) compared to the full proposed architecture (DSC=47.97). The proposed method outperforms the current state of the art. In addition, using the proposed single-species or synthetic datasets can double the performance of the algorithm compared to using real datasets alone, without additional manual annotation effort.

    We would like to thank BASF technicians Rainer Oberst, Gerd Kraemer, Hikal Gad, Javier Romero and Juan Manuel Contreras, as well as Amaia Ortiz-Barredo from Neiker, for their support in the design of the experiments and the generation of the datasets used in this work. This work was partially supported by the Basque Government through the ELKARTEK project BASQNET (ref. K-2021/00014).
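    The auxiliary classification loss can be understood as adding an image-level head that predicts which species appear anywhere in the image, giving the network an extra, easier supervision signal during training. The following PyTorch sketch shows one plausible formulation; the loss weight and the way presence targets are derived from the mask are assumptions, not the paper's exact recipe.

```python
# Sketch of a segmentation loss supplemented with an auxiliary image-level
# classification loss. aux_weight and the presence-target derivation are assumed.
import torch
import torch.nn.functional as F

def combined_loss(seg_logits, seg_target, cls_logits, aux_weight=0.4):
    # seg_logits: (B, C, H, W) per-pixel class scores from the PSPNet-style decoder
    # seg_target: (B, H, W) integer class labels
    # cls_logits: (B, C) image-level species-presence scores from the auxiliary head
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    # Assumed: multi-label presence targets derived from the ground-truth mask
    presence = torch.stack(
        [(seg_target == c).flatten(1).any(dim=1).float()
         for c in range(seg_logits.shape[1])], dim=1)
    cls_loss = F.binary_cross_entropy_with_logits(cls_logits, presence)
    return seg_loss + aux_weight * cls_loss
```

    At inference time the auxiliary head can simply be discarded, since its only role, per the abstract, is to aid convergence.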

    Analysis of Few-Shot Techniques for Fungal Plant Disease Classification and Evaluation of Clustering Capabilities Over Real Datasets

    Plant fungal diseases are one of the most important causes of crop yield losses. Plant disease identification algorithms are therefore a useful tool to detect them at early stages and mitigate their effects. Although deep learning-based algorithms can achieve high detection accuracies, they require large, manually annotated image datasets that are not always available, especially for rare and new diseases. This study focuses on the development of a plant disease detection algorithm and strategy requiring few plant images (a few-shot learning approach). We extend previous work by using a novel, challenging dataset containing more than 100,000 images. This dataset includes images of leaves, panicles and stems of five different crops (barley, corn, rapeseed, rice, and wheat) for a total of 17 different diseases, where each disease is shown at different stages. We propose a deep metric learning-based method that extracts latent space representations of plant diseases from just a few images by means of a Siamese network and a triplet loss function. This improves on previous methods that require a support dataset containing a high number of annotated images to perform metric learning and few-shot classification. The proposed method was compared against a traditional network trained with the cross-entropy loss function. Exhaustive experiments were performed to validate and measure the benefits of metric learning techniques over classical methods. Results show that the features extracted by the metric learning-based approach present better discriminative and clustering properties. Davies-Bouldin index and Silhouette score values show that the triplet loss network improves the clustering properties with respect to the categorical cross-entropy loss. Overall, the triplet loss approach improves the DB index value by 22.7% and the Silhouette score value by 166.7% compared to the categorical cross-entropy loss model. Moreover, the F-score obtained from the Siamese network with the triplet loss outperforms classical approaches when there are few images for training, with a 6% improvement in the mean F-score. Siamese networks with triplet loss improve the ability to learn different plant diseases from few images of each class, and these metric learning techniques improve clustering and classification results over traditional categorical cross-entropy loss networks for plant disease identification.

    This project was partially supported by the Spanish Government through the CDTI (Centro para el Desarrollo Tecnológico e Industrial) project AI4ES (ref. CER-20211030), by the University of the Basque Country (UPV/EHU) under grant COLAB20/01, and by the Basque Government through grant IT1229-19.
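    The core training signal described above is the triplet loss: an anchor image is pulled towards a positive example of the same disease and pushed away from a negative example of a different one. The sketch below illustrates this with a stand-in embedding network and a nearest-prototype few-shot classifier; the backbone, margin and embedding size are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of metric learning with a triplet loss for few-shot disease
# classification. The embedding network, margin and sizes are assumptions.
import torch
import torch.nn as nn

embed = nn.Sequential(               # stand-in for the paper's Siamese CNN branch
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 64))
triplet = nn.TripletMarginLoss(margin=1.0)

def triplet_step(anchor, positive, negative):
    """One training step: same-disease pairs pulled together, others pushed apart."""
    return triplet(embed(anchor), embed(positive), embed(negative))

def classify_few_shot(query, support, support_labels):
    """Nearest-prototype classification in the learned embedding space."""
    zq, zs = embed(query), embed(support)
    protos = torch.stack([zs[support_labels == c].mean(dim=0)
                          for c in support_labels.unique()])
    return torch.cdist(zq, protos).argmin(dim=1)  # index into sorted unique labels
```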

    MRI Deep Learning-Based Solution for Alzheimer's Disease Prediction

    Background: Alzheimer's is a degenerative dementing disorder that starts with mild memory impairment and progresses to a total loss of mental and physical faculties. The sooner the diagnosis is made, the better for the patient, as preventive actions and treatment can be started. Although tests such as the Mini-Mental State Examination are usually used for early identification, diagnosis relies on magnetic resonance imaging (MRI) brain analysis. Methods: Public initiatives such as the OASIS (Open Access Series of Imaging Studies) collection provide neuroimaging datasets openly available for research purposes. In this work, a new method based on deep learning and image processing techniques for MRI-based Alzheimer's diagnosis is proposed and compared with previous works in the literature. Results: Our method achieves a balanced accuracy (BAC) of up to 0.93 for image-based automated diagnosis of the disease, and a BAC of 0.88 for establishing the disease stage (healthy tissue, very mild stage and severe stage). Conclusions: The results obtained surpass state-of-the-art proposals using the OASIS collection. This demonstrates that deep learning-based strategies are an effective tool for building a robust solution for Alzheimer's-assisted diagnosis based on MRI data.

    This work was partially supported by the SUPREME project, which has received funding from the Basque Government's Industry Department HAZITEK program under agreement ZE-2019/00022. This research has also received funding from the Basque Government's Industry Department under the ELKARTEK program's project ONKOTOOLS under agreement KK-2020/0006.
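    Balanced accuracy, the metric reported above, is the mean of per-class recalls, which keeps the score honest when one diagnosis class dominates the dataset, as is common in medical imaging. A minimal example using scikit-learn (the labels below are illustrative):

```python
# Balanced accuracy (BAC) = average of per-class recall; unlike plain accuracy,
# it is not inflated by a majority class. Label encoding here is illustrative.
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 0, 1, 2]   # e.g. 0=healthy, 1=very mild, 2=severe
y_pred = [0, 0, 0, 1, 1, 2]
print(balanced_accuracy_score(y_true, y_pred))  # (3/4 + 1/1 + 1/1) / 3 ≈ 0.92
```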

    Deep convolutional neural network for damaged vegetation segmentation from RGB images based on virtual NIR-channel estimation

    Performing accurate and automated semantic segmentation of vegetation is a first algorithmic step towards more complex models that can extract accurate biological information on crop health, weed presence and phenological state, among others. Traditionally, models based on the normalized difference vegetation index (NDVI), the near-infrared (NIR) channel or RGB data have been good indicators of vegetation presence. However, these methods are not suitable for accurately segmenting vegetation that shows damage, which precludes their use in downstream phenotyping algorithms. In this paper, we propose a comprehensive method for robust vegetation segmentation in RGB images that can cope with damaged vegetation. The method first uses a regression convolutional neural network to estimate a virtual NIR channel from an RGB image. Second, we compute two newly proposed vegetation indices from this estimated virtual NIR channel: the infrared-dark channel subtraction (IDCS) and infrared-dark channel ratio (IDCR) indices. Finally, both the RGB image and the estimated indices are fed into a semantic segmentation deep convolutional neural network to train a model that segments vegetation regardless of damage or condition. The model was tested on 84 plots containing thirteen vegetation species showing different degrees of damage, acquired over 28 days. The results show that the best segmentation is obtained when the input image is augmented with the proposed virtual NIR channel (F1=0.94) and with the proposed IDCR and IDCS vegetation indices (F1=0.95) derived from the estimated NIR channel, while using only the RGB image (F1=0.90), the NIR channel (F1=0.82) or the NDVI channel (F1=0.89) leads to inferior performance. The proposed method provides an end-to-end land cover segmentation method directly from simple RGB images and has been successfully validated in real field conditions.
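    The abstract names the IDCS and IDCR indices but does not give their formulas. The sketch below shows one plausible reading, assuming the "dark channel" is the per-pixel minimum over the RGB channels and that the indices respectively subtract it from, and divide it into, the virtual NIR channel produced by the regression network; treat these as illustrative definitions, not the paper's.

```python
# Illustrative index computation from an RGB image plus an estimated virtual
# NIR channel. The dark-channel definition and index formulas are assumptions.
import numpy as np

def vegetation_indices(rgb: np.ndarray, nir: np.ndarray, eps: float = 1e-6):
    """rgb: (H, W, 3) floats in [0, 1], channel order R, G, B; nir: (H, W)."""
    dark = rgb.min(axis=2)                                   # assumed dark channel
    idcs = nir - dark                                        # assumed IDCS
    idcr = nir / (dark + eps)                                # assumed IDCR
    ndvi = (nir - rgb[..., 0]) / (nir + rgb[..., 0] + eps)   # classic NDVI baseline
    return idcs, idcr, ndvi
```

    Per the abstract, these index maps would then be stacked with the RGB channels as extra input planes to the segmentation network.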
