
    Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions

    Convolutional Neural Networks (CNN) have demonstrated their capabilities in the agronomic field, especially for the assessment of visual plant symptoms. As these models grow both in the number of training images and in the number of supported crops and diseases, a dichotomy arises between (1) generating smaller, crop-specific models, or (2) generating a single multi-crop model for a much more complex task (especially at early disease stages), but with the benefit of the variability of the entire multi-crop image dataset to enrich image feature learning. In this work we first introduce a challenging dataset of more than one hundred thousand images taken with cell phones under real, in-field conditions. The dataset contains almost equally distributed disease stages of seventeen diseases across five crops (wheat, barley, corn, rice and rapeseed), and several diseases can be present in the same picture. Applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC=0.92 with the smaller crop-specific models and BAC=0.93 with a single multi-crop model. We then propose three different CNN architectures that incorporate contextual non-image metadata, such as crop information, into an image-based Convolutional Neural Network. This combines the advantage of learning from the entire multi-crop dataset with a reduction in the complexity of the disease classification task. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding-vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of the misclassifications of the former methods.
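    The best-performing variant described above fuses crop information with the image embedding by concatenation. A minimal PyTorch sketch of what such a fusion could look like is given below; the ResNet-50 backbone, embedding width, and classifier head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class CropConditionalClassifier(nn.Module):
    """Image CNN whose embedding is concatenated with a crop-identity
    embedding before the final disease classifier (illustrative sketch)."""

    def __init__(self, num_crops: int = 5, num_diseases: int = 17, crop_dim: int = 16):
        super().__init__()
        backbone = models.resnet50(weights=None)   # any CNN backbone would do
        feat_dim = backbone.fc.in_features         # 2048 for ResNet-50
        backbone.fc = nn.Identity()                # keep the embedding vector
        self.backbone = backbone
        self.crop_embedding = nn.Embedding(num_crops, crop_dim)
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim + crop_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, num_diseases),
        )

    def forward(self, images: torch.Tensor, crop_ids: torch.Tensor) -> torch.Tensor:
        img_feat = self.backbone(images)           # (B, feat_dim)
        crop_feat = self.crop_embedding(crop_ids)  # (B, crop_dim)
        fused = torch.cat([img_feat, crop_feat], dim=1)
        return self.classifier(fused)              # disease logits

# Example forward pass with a dummy batch of two images and two crop ids.
model = CropConditionalClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.tensor([0, 3]))
print(logits.shape)  # torch.Size([2, 17])
```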

    Crop leaf disease detection and classification using machine learning and deep learning algorithms by visual symptoms: a review

    Quick and precise crop leaf disease detection is important for increasing agricultural yield in a sustainable manner. In this paper we present a comprehensive overview of recent research on crop leaf disease prediction using image processing (IP), machine learning (ML) and deep learning (DL) techniques, which have made notable accuracies attainable. The article surveys research papers that present the various methodologies and analyzes them in terms of the dataset, number of images, number of classes, algorithms used, convolutional neural network (CNN) models employed, and overall performance achieved. Suggestions are then given on the most appropriate algorithms to deploy on standard, mobile/embedded systems, drones, robots and unmanned aerial vehicles (UAV). We discuss the performance measures used and list some of the limitations and future work that need to be addressed to extend real-time automated crop leaf disease detection systems.

    Species196: A One-Million Semi-supervised Dataset for Fine-grained Species Recognition

    The development of foundation vision models has pushed general visual recognition to a high level, but these models cannot adequately address fine-grained recognition in specialized domains such as invasive species classification. Identifying and managing invasive species has strong social and ecological value. Currently, most invasive species datasets are limited in scale and cover a narrow range of species, which restricts the development of deep-learning-based invasion biometrics systems. To fill this gap, we introduce Species196, a large-scale semi-supervised dataset of 196 categories of invasive species. It collects over 19K images with expert-level accurate annotations (Species196-L) and 1.2M unlabeled images of invasive species (Species196-U). The dataset provides four experimental settings for benchmarking existing models and algorithms: supervised learning, semi-supervised learning, self-supervised pretraining, and zero-shot inference with large multi-modal models. To facilitate future research on these four learning paradigms, we conduct an empirical study of representative methods on the introduced dataset. The dataset is publicly available at https://species-dataset.github.io/. Accepted at the NeurIPS 2023 Datasets and Benchmarks Track.
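    One of the four benchmark settings pairs the labeled Species196-L split with the unlabeled Species196-U split for semi-supervised learning. A minimal pseudo-labeling training step of the kind commonly benchmarked in this setting might look as follows; the model, confidence threshold, and batch handling are placeholder assumptions, not the paper's protocol.

```python
import torch
import torch.nn.functional as F

def pseudo_label_step(model, optimizer, labeled_batch, unlabeled_images,
                      threshold: float = 0.95, unlabeled_weight: float = 1.0):
    """One optimization step of basic pseudo-labeling: supervised loss on
    labeled data plus confidence-filtered cross-entropy on model-generated
    labels for unlabeled data (illustrative sketch)."""
    images, labels = labeled_batch
    model.train()

    # Supervised loss on the labeled split (Species196-L-style data).
    sup_loss = F.cross_entropy(model(images), labels)

    # Generate pseudo-labels on unlabeled images without tracking gradients.
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_images), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold          # keep only confident predictions

    unsup_loss = torch.tensor(0.0, device=images.device)
    if mask.any():
        unsup_loss = F.cross_entropy(model(unlabeled_images[mask]), pseudo[mask])

    loss = sup_loss + unlabeled_weight * unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```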

    AgriPest: A large-scale domain-specific benchmark dataset for practical agricultural pest detection in the wild

    The recent explosion of large, standardized datasets of annotated images has offered promising opportunities for deep learning techniques in effective and efficient object detection applications. However, owing to the large quality gap between these standardized datasets and practical raw data, how to maximize the utility of deep learning techniques in practical agricultural applications remains a critical problem. Here, we introduce a domain-specific benchmark dataset, called AgriPest, for tiny wild pest recognition and detection, providing researchers and communities with a standard large-scale dataset of wild pest images and annotations, as well as evaluation procedures. Over the past seven years, AgriPest has captured 49.7K images of four crops containing 14 species of pests, collected with our purpose-built image collection equipment in field environments. All of the images are manually annotated by agricultural experts, with up to 264.7K bounding boxes locating the pests. This paper also offers a detailed analysis of AgriPest, in which the validation set is split into four types of scenes common in practical pest monitoring applications. We explore and evaluate the performance of state-of-the-art deep learning techniques on AgriPest. We believe that the scale, accuracy, and diversity of AgriPest can offer great opportunities to researchers in computer vision as well as pest monitoring applications.
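    Since AgriPest is a bounding-box detection benchmark, a typical baseline is an off-the-shelf detector whose classification head is adapted to the 14 pest classes. The sketch below uses torchvision's Faster R-CNN as one plausible way to set this up; the backbone choice and class mapping are assumptions for illustration, not the authors' evaluation code.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_pest_detector(num_pest_classes: int = 14):
    """Adapt a standard Faster R-CNN so its box head predicts the pest
    classes plus background (illustrative baseline, not the paper's)."""
    model = fasterrcnn_resnet50_fpn(weights=None)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_pest_classes + 1)
    return model

# Training-style forward pass with a dummy image and one dummy box.
model = build_pest_detector()
images = [torch.randn(3, 512, 512)]
targets = [{"boxes": torch.tensor([[30.0, 40.0, 60.0, 80.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)   # dict of classification / box-regression losses
print(sorted(loss_dict.keys()))
```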

    Study and analysis of deep Convolutional Neural Networks for image-based plant disease identification

    Doctoral thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2022. Convolutional Neural Networks (CNNs) have demonstrated strong potential for computer vision tasks. The most prominent feature of CNNs is their ability to exploit spatial or temporal correlation in the data. Accordingly, several improvements in network learning methodology and architecture have been made to make CNNs scalable to large, heterogeneous, complex, multi-class problems. Agriculture delimits a scope of challenging problems that lack technologies to increase agricultural production, especially with respect to coping with diseases. Plant diseases are considered one of the main factors influencing food production, and their identification is primarily performed by manual techniques or microscopy, which increases diagnosis time and the possibility of error. Automated plant disease identification solutions using images and machine learning, especially CNNs, have provided significant advances. However, most approaches have low classification capacity, aggravated by simultaneous infestations by different pathogens and by symptomatic confusion caused by abiotic factors. The aim of this work is therefore to analyze and evaluate CNN architectures, exploring their potential and prospecting new architectural arrangements to classify plant diseases and identify pathogens. The approach used a customization strategy in which independent operative networks or convolutional blocks are integrated into a single model to capture a more varied set of features. NEMANet is a relevant result of this CNN customization approach, applied to the classification of phytonematodes in microscopic images. The model achieved the best accuracy rate, reaching 99.35%, yielding overall accuracy improvements greater than 6.83% and 4.1% for training from weight initialization and for transfer learning, respectively, compared with the other evaluated architectures. The results show that customizing CNN architectures is a promising approach for gains in accuracy, network convergence, and model size.
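    The customization strategy described above integrates independent convolutional blocks into a single model so that complementary features are captured. A minimal two-branch sketch of that flavor is shown below; the branch depths, kernel sizes, channel counts, and pooling/concatenation scheme are assumptions for illustration, not the NEMANet architecture itself.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, kernel: int) -> nn.Sequential:
    """A small convolutional block; each branch stacks these independently."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class MultiBranchCNN(nn.Module):
    """Two independent convolutional branches whose pooled features are
    concatenated before a shared classifier (illustrative customization)."""

    def __init__(self, num_classes: int):
        super().__init__()
        # One branch with small receptive fields, one with larger ones.
        self.branch_a = nn.Sequential(conv_block(3, 32, 3), conv_block(32, 64, 3))
        self.branch_b = nn.Sequential(conv_block(3, 32, 5), conv_block(32, 64, 5))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.pool(self.branch_a(x)).flatten(1)   # (B, 64)
        b = self.pool(self.branch_b(x)).flatten(1)   # (B, 64)
        return self.classifier(torch.cat([a, b], dim=1))

# Dummy forward pass over a microscopy-sized patch.
logits = MultiBranchCNN(num_classes=5)(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 5])
```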