
    Techniques of deep learning and image processing in plant leaf disease detection: a review

    Computer vision techniques are an emerging trend today. Digital image processing is gaining popularity because of the significant upsurge in the use of digital images over the internet. Digital image processing makes it possible to design sophisticated machines that approximate the visual functionality of the human eye. In agriculture, leaf examination is important for disease identification and for early warning of any deficiency within the plant. Many prominent plant species are facing extinction because of a lack of knowledge about them. Properly applied computer vision techniques can extract a significant amount of information from a leaf image. This motivates the need for an automatic leaf disease detection method that diagnoses disease occurrence and severity, enabling timely crop management such as targeted pesticide spraying. This study focuses on techniques of digital image processing and machine learning applied to plant leaf disease detection, which has great potential in precision agriculture. To support this study, techniques used by various researchers in recent years are tabulated.

    Towards Multi-Level Classification in Deep Plant Identification

    Graduation thesis (academic doctorate in Engineering), Instituto Tecnológico de Costa Rica, 2018. In the last decade, automatic identification of organisms based on computer vision techniques has been a hot topic for both biodiversity scientists and machine learning specialists. Early on, plants became particularly attractive as a subject of study for two main reasons. On the one hand, quick and accurate inventories of plants are critical for biodiversity conservation; for example, they are indispensable in conducting ecosystem inventories, defining models for environmental service payments, and tracking populations of invasive plant species, among others. On the other hand, plants are a more tractable group than, for instance, insects. First, the number of species is smaller (around 400,000 compared to more than 8 million). Second, they are better understood by the scientific community, particularly with respect to their morphometric features. Third, there are large, fast-growing databases of digital plant images generated by both scientists and the general public. Finally, an incremental approach based first on "flat elements" such as leaves and then on the whole plant made it feasible to use computer vision techniques early on. As a result, even mobile apps for the general public are available nowadays. This document presents the key results obtained while tackling the general problem of fully automating the identification of plant species based solely on images. It describes the key findings of a research path that started with a restricted scope, namely, identification of plants from Costa Rica using a morphometric approach that considers images of fresh leaves only. Then, species from other regions of the world were included, but still using hand-crafted feature extractors. A key methodological turn was the subsequent use of deep learning techniques on images of any component of a plant. We then studied the accuracy of a deep learning approach trained on datasets of fresh plant images and compared it, for the first time, with datasets of herbarium sheet images. Among the results obtained during this research, potential biases in automatic plant identification datasets were found and characterized. The feasibility of transfer learning between different regions of the world was also proven. Even more importantly, it was demonstrated for the first time that herbarium sheets are a good resource for identifying plants mounted on herbarium sheets, which gives additional importance to herbaria around the globe. Finally, as a culmination of this research path, this document presents the results of developing a novel multi-level classification approach that uses knowledge about higher taxonomic levels to carry out family- and genus-level identifications and to try to improve the accuracy of species-level identifications. This last step focuses on the creation of a hierarchical loss function based on known plant taxonomies, coupled with multi-level deep learning architectures, to guide model optimization with prior knowledge of a given class hierarchy.
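
    The hierarchical loss mentioned above can be thought of as a weighted sum of cross-entropy terms at the family, genus, and species levels, where the higher-level targets are derived from the species label through a known taxonomy. A minimal PyTorch sketch under that assumption is shown below; the three classification heads, the lookup tables species_to_genus and genus_to_family, and the weights are illustrative placeholders, not the thesis's exact formulation.

```python
import torch
import torch.nn.functional as F

def hierarchical_loss(family_logits, genus_logits, species_logits,
                      species_target, species_to_genus, genus_to_family,
                      weights=(0.2, 0.3, 0.5)):
    """Weighted sum of cross-entropy losses at three taxonomic levels.

    species_target: tensor of species class indices.
    species_to_genus / genus_to_family: 1-D lookup tensors mapping a child
    class index to its parent class index (hypothetical taxonomy tables).
    """
    genus_target = species_to_genus[species_target]
    family_target = genus_to_family[genus_target]
    loss = (weights[0] * F.cross_entropy(family_logits, family_target)
            + weights[1] * F.cross_entropy(genus_logits, genus_target)
            + weights[2] * F.cross_entropy(species_logits, species_target))
    return loss
```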

    AlexNet-Based Feature Extraction for Cassava Classification: A Machine Learning Approach

    Cassava, a significant crop in Africa, Asia, and South America, is a staple food for millions. However, classifying cassava species using conventional color, texture, and shape features is inefficient, as cassava leaves exhibit similarities across different types, including toxic and non-toxic varieties. This research aims to overcome the limitations of traditional classification methods by employing deep learning techniques, with a pre-trained AlexNet as the feature extractor, to classify four types of cassava: Gajah, Manggu, Kapok, and Beracun. The dataset was collected from local farms in Lamongan, Indonesia, with the assistance of agricultural research experts; it consists of 1,400 images, with 350 images per cassava type. Three fully connected (FC) layers were used for feature extraction, namely fc6, fc7, and fc8. The classifiers employed were support vector machine (SVM), k-nearest neighbors (KNN), and Naive Bayes. The study demonstrated that the most effective feature extraction layer was fc6, achieving an accuracy of 90.7% with SVM. SVM outperformed KNN and Naive Bayes, exhibiting an accuracy of 90.7%, sensitivity of 83.5%, specificity of 93.7%, and F1-score of 83.5%. This research successfully addressed the challenges of classifying cassava species by combining deep learning and machine learning methods, specifically SVM with the fc6 layer of AlexNet. The proposed approach holds promise for enhancing plant classification techniques, benefiting researchers, farmers, and environmentalists in plant species identification, ecosystem monitoring, and agricultural management.
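
    As a rough illustration of the feature-extraction pipeline described above, the sketch below loads a pre-trained AlexNet from torchvision, reads out the fc6 activation (the first 4096-dimensional fully connected layer), and feeds the resulting features to a scikit-learn SVM. The file paths, dataset layout, and hyper-parameters are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

# Pre-trained AlexNet; fc6 is the first linear layer of the classifier (4096-d output).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fc6_features(image_path):
    """Return the 4096-d fc6 activation for a single leaf image."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = alexnet.features(x)
        x = alexnet.avgpool(x)
        x = torch.flatten(x, 1)
        x = alexnet.classifier[1](x)   # fc6: the first fully connected layer
    return x.squeeze(0).numpy()

# Hypothetical usage with lists of image paths and integer class labels:
# features = [fc6_features(p) for p in train_paths]
# clf = SVC(kernel="rbf").fit(features, train_labels)
```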

    Recognising Ayurvedic Herbal Plants in Sri Lanka using Convolutional Neural Networks

    Different parts of ayurvedic herbal plants are used to make ayurvedic medicines in Sri Lanka. Recognising these endemic herbal plants is a challenging problem in the fields of ayurvedic medicine, computer vision, and machine learning. In this research, a computer system has been developed to recognise ayurvedic plant leaves in Sri Lanka based on convolutional neural networks (CNNs). Convolutional neural networks with RGB and grayscale images, and multi-layer neural networks with RGB images, were used to recognise the ayurvedic plant leaves. In order to train the neural networks, images of 17 types of herbal plant leaves were captured from the plant nursery of Navinna Ayurveda Medical Hospital, Sri Lanka. As CNNs require a large number of images for training, various data augmentation methods were applied to the collected dataset to increase its size. Backgrounds of images were removed and all images were resized to 256 by 256 pixels before being submitted to a neural network. The results obtained were highly significant, and the CNN with RGB images achieved an accuracy of 97.71% for recognising ayurvedic herbal plant leaves in Sri Lanka. The study suggests that CNNs can be used to recognise ayurvedic herbal plants. Keywords: deep learning, traditional ayurvedic plants, convolutional neural networks, multi-layer neural networks, image recognition, computer vision
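
    A minimal sketch of the kind of preprocessing described above (resizing to 256 by 256 pixels and expanding the dataset with simple augmentations), paired with a small CNN for 17 leaf classes; the specific transforms and the network layers are illustrative assumptions rather than the authors' exact configuration.

```python
import torch.nn as nn
import torchvision.transforms as T

# Augmentation and resizing applied before feeding leaf images to the network.
train_transforms = T.Compose([
    T.Resize((256, 256)),
    T.RandomHorizontalFlip(),
    T.RandomRotation(degrees=20),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),
])

# A small CNN for 17 leaf classes (illustrative architecture only).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 128), nn.ReLU(),
    nn.Linear(128, 17),
)
```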

    Plant Disease Diagnosing Based on Deep Learning Techniques: A Survey and Research Challenges

    Agricultural crops are highly significant for the sustenance of human life and act as an essential source of national income worldwide. Plant diseases and pests are among the most important factors reducing food production and quality and causing losses in yield. Farmers currently face difficulty in identifying the various plant diseases and pests, an identification that is essential for preventing plant diseases effectively in a complex environment. The recent development of deep learning techniques has found use in the diagnosis of plant diseases and pests, providing a robust tool with highly accurate results. In this context, this paper presents a comprehensive review of the literature that aims to identify the state of the art in the use of convolutional neural networks (CNNs) for diagnosing and identifying plant pests and diseases. In addition, it presents some issues affecting model performance and indicates gaps that should be addressed in future work. In this regard, we review studies covering the methods used for plant disease detection, dataset characteristics, the crops, and the pathogens. Moreover, the paper discusses the commonly employed five-step methodology for plant disease recognition, involving data acquisition, preprocessing, segmentation, feature extraction, and classification. It also discusses various deep learning architecture-based solutions that offer a faster convergence rate for plant disease recognition. From this review, it is possible to understand the innovative trends in the use of CNN algorithms for plant disease diagnosis and to recognize the gaps that need the attention of the research community.
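
    The five-step methodology the survey describes (acquisition, preprocessing, segmentation, feature extraction, classification) can be sketched as a simple pipeline. The OpenCV-based steps below are a generic illustration under assumed thresholds and a placeholder classifier, not a method taken from any surveyed paper.

```python
import cv2
import numpy as np

def diagnose_leaf(image_path, classifier):
    """Generic five-step pipeline: acquire, preprocess, segment, extract, classify."""
    # 1. Data acquisition
    bgr = cv2.imread(image_path)
    # 2. Preprocessing: resize and denoise
    bgr = cv2.resize(bgr, (256, 256))
    bgr = cv2.GaussianBlur(bgr, (5, 5), 0)
    # 3. Segmentation: crude leaf mask from the green hue range in HSV (assumed thresholds)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (25, 40, 40), (95, 255, 255))
    # 4. Feature extraction: colour histogram restricted to the leaf region
    hist = cv2.calcHist([hsv], [0, 1], mask, [16, 16], [0, 180, 0, 256])
    features = cv2.normalize(hist, None).flatten()
    # 5. Classification with any pre-trained scikit-learn style classifier
    return classifier.predict(features.reshape(1, -1))[0]
```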

    CASM-AMFMNet: A Network based on Coordinate Attention Shuffle Mechanism and Asymmetric Multi-Scale Fusion Module for Classification of Grape Leaf Diseases

    Grape disease is a significant contributory factor to the decline in grape yield, and it typically affects the leaves first. Efficient identification of grape leaf diseases remains a critical unmet need. To mitigate background interference in grape leaf feature extraction and improve the ability to extract small disease spots, we combined the characteristic features of grape leaf diseases and developed a novel method for disease recognition and classification in this study. First, a Gaussian filter, Sobel smoothing, de-noising Laplace operator (GSSL) was employed to reduce image noise and enhance the texture of grape leaves. A novel network, designated coordinate attention shuffle mechanism-asymmetric multi-scale fusion module net (CASM-AMFMNet), was subsequently applied for grape leaf disease identification. CoAtNet was employed as the network backbone to improve model learning and generalization capabilities, which alleviated the problem of gradient explosion to a certain extent. The coordinate attention shuffle mechanism was further utilized to capture and target grape leaf disease areas, thereby reducing background interference. Finally, an asymmetric multi-scale fusion module (AMFM) was employed to extract multi-scale features from small disease spots on grape leaves for accurate identification of small-target diseases. The experimental results based on our self-built grape leaf image dataset showed that, compared to existing methods, CASM-AMFMNet achieved an accuracy of 95.95%, an F1 score of 95.78%, and an mAP of 90.27%. Overall, the model and methods proposed in this report can successfully identify different diseases of grape leaves and provide a feasible scheme for deep learning to correctly recognize grape diseases during agricultural production, which may also serve as a reference for other crop diseases.
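
    The GSSL preprocessing step described above combines Gaussian filtering, Sobel gradient information, and the Laplace operator to de-noise the image and enhance leaf texture. A rough OpenCV sketch of that idea is shown below; the exact way the paper combines the operators, and the blending weights used here, are assumptions.

```python
import cv2
import numpy as np

def gssl_like_enhance(image_path):
    """Denoise with a Gaussian filter, then sharpen texture using Sobel gradients
    and the Laplacian (an approximation of the GSSL idea, not the paper's exact recipe)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    smoothed = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)
    sobel_x = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)
    gradient = cv2.magnitude(sobel_x, sobel_y)
    laplacian = cv2.Laplacian(smoothed, cv2.CV_32F, ksize=3)
    # Blend the smoothed image with edge/texture responses (weights are illustrative).
    enhanced = smoothed + 0.5 * gradient - 0.5 * laplacian
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```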

    Local Binary Pattern based algorithms for the discrimination and detection of crops and weeds with similar morphologies

    Get PDF
    In cultivated agricultural fields, weeds are unwanted species that compete with the crop plants for nutrients, water, sunlight and soil, thus constraining their growth. Applying new real-time weed detection and spraying technologies to agriculture would enhance current farming practices, leading to higher crop yields and lower production costs. Various weed detection methods have been developed for Site-Specific Weed Management (SSWM), aimed at maximising the crop yield through efficient control of weeds. Blanket application of herbicide chemicals is currently the most popular practice for managing weed invasions. However, the excessive use of herbicides has a detrimental impact on human health, the economy and the environment. Before weeds become resistant to herbicides, while they still respond well to weed control strategies, it is necessary to control them in the fallow, pre-sowing, early post-emergent and pasture phases. Moreover, the development of herbicide resistance in weeds is a driving force for inventing precision and automated weed treatments. Various weed detection techniques have been developed to identify weed species in crop fields, aimed at improving crop quality, reducing herbicide and water usage and minimising environmental impacts. In this thesis, Local Binary Pattern (LBP)-based algorithms are developed and tested experimentally; they are based on extracting dominant plant features from camera images to precisely detect weeds among crops in real time. Building on the efficient computation and robustness of the basic LBP method, an improved LBP-based method is developed that uses three different LBP operators for plant feature extraction in conjunction with a Support Vector Machine (SVM) for multiclass plant classification. A 24,000-image dataset, collected using a testing facility under simulated field conditions (Testbed system), is used for algorithm training, validation and testing. The dataset, which is published online under the name “bccr-segset”, consists of four subclasses: background, Canola (Brassica napus), Corn (Zea mays), and Wild radish (Raphanus raphanistrum). In addition, the dataset comprises plant images collected at four crop growth stages for each subclass. The computer-controlled Testbed is designed to rapidly label plant images and generate the “bccr-segset” dataset. Experimental results show that the classification accuracy of the improved LBP-based algorithm is 91.85% for the four classes. Due to the similarity of the morphologies of canola (crop) and wild radish (weed) leaves, the conventional LBP-based method has limited ability to discriminate broadleaf crops from weeds. To overcome this limitation and handle complex field conditions (illumination variation, poses, viewpoints, and occlusions), a novel LBP-based method (denoted k-FLBPCM) is developed to enhance the classification accuracy of crops and weeds with similar morphologies. Our contributions include (i) the use of opening and closing morphological operators in the pre-processing of plant images, (ii) the development of the k-FLBPCM method by combining two methods, namely the filtered local binary pattern (LBP) method and the contour-based masking method with a coefficient k, and (iii) the optimal use of an SVM with the radial basis function (RBF) kernel to precisely identify broadleaf plants based on their distinctive features.
The high performance of this k-FLBPCM method is demonstrated by experimentally attaining up to 98.63% classification accuracy at four different growth stages for all classes of the “bccr-segset” dataset. To evaluate the performance of the k-FLBPCM algorithm in real time, a comparative analysis between our novel method (k-FLBPCM) and deep convolutional neural networks (DCNNs) is conducted on morphologically similar crops and weeds. Various DCNN models, namely VGG-16, VGG-19, ResNet50 and InceptionV3, are optimised by fine-tuning their hyper-parameters and then tested. Based on the experimental results on the “bccr-segset” dataset collected in the laboratory and the “fieldtrip_can_weeds” dataset collected in the field under practical conditions, the classification accuracies of the DCNN models and the k-FLBPCM method are almost identical. Another experiment is conducted by training the algorithms with plant images obtained at mature stages and testing them at early stages. In this case, the new k-FLBPCM method outperforms the state-of-the-art CNN models in identifying the small leaf shapes of canola and radish (crop and weed) at early growth stages, with an order of magnitude lower error rates than the DCNN models. Furthermore, the execution time of the k-FLBPCM method during the training and test phases is faster than that of its DCNN counterparts, with an identification time difference of approximately 0.224 ms per image for the laboratory dataset and 0.346 ms per image for the field dataset. These results demonstrate the ability of the k-FLBPCM method to rapidly detect weeds among crops of similar appearance in real time with less data, and to generalise to plants of different sizes better than the CNN-based methods.
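
    A minimal sketch of the LBP-plus-SVM idea underlying these methods, using scikit-image's local_binary_pattern and an RBF-kernel SVM. The LBP parameters, histogram binning, and morphological pre-processing shown here are assumptions for illustration, not the thesis's k-FLBPCM implementation.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, points=8, radius=1):
    """Uniform LBP histogram of a grayscale plant image (illustrative parameters)."""
    # Morphological opening and closing to suppress small noise before feature extraction.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2  # number of distinct uniform patterns
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Hypothetical usage with lists of grayscale images and class labels:
# X = np.array([lbp_histogram(img) for img in train_images])
# clf = SVC(kernel="rbf", C=10).fit(X, train_labels)
```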

    Photometric stereo for three-dimensional leaf venation extraction

    Leaf venation extraction studies have long been hindered by the considerable challenges posed by venation architectures that are complex, diverse and subtle. Additionally, unpredictable local leaf curvatures, undesirable ambient illumination, and abnormal leaf conditions may coexist with other complications. While leaf venation extraction has high potential for assisting with plant phenotyping, speciation and modelling, investigations to date have been confined to colour image acquisition and processing, which are commonly confounded by the aforementioned biotic and abiotic variations. To bridge the gaps in this area, we have designed a 3D imaging system for leaf venation extraction that can overcome dark or bright ambient illumination and allows for 3D data reconstruction at high resolution. We further propose a novel leaf venation extraction algorithm that obtains illumination-independent surface normal features by performing photometric stereo reconstruction, as well as local shape measures by fusing the decoupled shape index and curvedness features. In addition, this algorithm can determine venation polarity, that is, whether veins are raised above or recessed into a leaf. Tests on both sides of leaves from different species with varied venation architectures show that the proposed method is accurate in extracting the primary, secondary and even tertiary veins. It also proves to be robust against leaf diseases that cause dramatic changes in colour. The effectiveness of this algorithm in determining venation polarity is verified by its correct recognition of raised or recessed veins in nine different experiments.
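
    A compact sketch of the photometric stereo step described above: given several images taken under known light directions, the per-pixel surface normal (and albedo) can be recovered by least squares, after which shape index and curvedness can be derived from the reconstructed surface. The array shapes and the absence of shadow or specularity handling are simplifications; this is a generic illustration, not the authors' full system.

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Recover per-pixel unit surface normals from k images under known lighting.

    images: array of shape (k, H, W), grayscale intensities.
    light_dirs: array of shape (k, 3), unit light direction vectors.
    Solves I = L @ (albedo * n) per pixel in the least-squares sense.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                                 # (k, H*W)
    g, _, _, _ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(g, axis=0) + 1e-8
    normals = (g / albedo).T.reshape(h, w, 3)                 # unit normal per pixel
    return normals, albedo.reshape(h, w)
```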