6 research outputs found

    Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions

    Convolutional Neural Networks (CNNs) have demonstrated their capabilities in the agronomical field, especially for the assessment of visual plant disease symptoms. As these models grow in both the number of training images and the number of supported crops and diseases, a dichotomy arises between (1) generating smaller models for each specific crop, or (2) generating a single multi-crop model for a much more complex task (especially at early disease stages), but with the benefit of the full variability of the multi-crop image dataset to enrich the learning of image feature descriptions. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone under real, uncontrolled field conditions. This dataset contains almost equally distributed disease stages of seventeen diseases and five crops (wheat, barley, corn, rice and rape-seed), where several diseases can be present in the same picture. When applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC=0.92 with the smaller crop-specific models and BAC=0.93 with a single multi-crop model. We then propose three different CNN architectures that incorporate contextual non-image meta-data, such as crop information, into an image-based Convolutional Neural Network. This combines the advantages of learning from the entire multi-crop dataset while reducing the complexity of the disease classification task. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding-vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of the misclassifications of the former methods.
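The concatenation of contextual crop meta-data at the embedding-vector level can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the CNN backbone is abstracted away as a precomputed image embedding, and the weight values and embedding dimension are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CROPS, N_DISEASES = 5, 17   # dataset sizes from the abstract
EMB_DIM = 128                 # assumed embedding size (illustrative)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical classifier head applied after the fusion point.
W = rng.normal(scale=0.01, size=(EMB_DIM + N_CROPS, N_DISEASES))
b = np.zeros(N_DISEASES)

def crop_conditional_logits(image_embedding, crop_id):
    crop_onehot = np.eye(N_CROPS)[crop_id]                   # contextual meta-data
    fused = np.concatenate([image_embedding, crop_onehot])   # concat at embedding level
    return fused @ W + b

emb = rng.normal(size=EMB_DIM)               # stand-in for a CNN image embedding
probs = softmax(crop_conditional_logits(emb, crop_id=2))
```

The design choice is that the crop label narrows the disease hypothesis space for the shared classifier while the convolutional features are still learned from the full multi-crop dataset.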

    Image based Plant leaf disease detection using Deep learning

    Agriculture is important for India. Every year, a growing variety of crops is lost due to inefficiencies in shipping, cultivation and storage of government-subsidized crops, and to pest infestation. Production of good crops is reduced in both quality and quantity because plants are affected by diseases; hence early detection and identification of plant diseases is important. The proposed methodology consists of collection of a plant leaf dataset, image preprocessing, image augmentation and neural network training. The dataset for the training phase is collected from ImageNet. A CNN is used to differentiate healthy leaves from disease-affected leaves. In image preprocessing, the images are resized to reduce training time. Image augmentation is performed in the training phase by applying various transformation functions to the plant images. The network is trained with the CaffeNet model in the Caffe deep learning framework, using ReLU (Rectified Linear Unit) activations. The convolutional base of the CNN extracts features from the image through multiple convolution and pooling layers. The classifier part of the CNN classifies the image based on the features extracted by the convolutional base; classification is performed through the fully connected layers, and the final layer uses a softmax activation function to categorize the outputs. Performance is measured using 10-fold cross-validation.
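The 10-fold cross-validation used to measure performance can be sketched as below. This is a generic NumPy illustration of the splitting scheme, not the authors' evaluation code; the sample count and seed are assumptions for the example.

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Yield (train, validation) index arrays for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once, then partition
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]                        # held-out fold
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# Each of the 10 rounds trains on 9 folds and validates on the held-out fold;
# the reported metric is the average over the 10 rounds.
splits = list(kfold_indices(n_samples=100, k=10))
```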

    Plant Disease Detection: Electronic System Design Empowered with Artificial Intelligence

    Today, plant diseases have become a major threat to the development of agriculture and forestry, not only affecting the normal growth of plants but also causing food safety problems. Hence, it is necessary to identify and detect diseased regions and the types of plant diseases as quickly as possible. We have developed a plant monitoring system consisting of sensors and cameras for early detection of plant diseases. First, we create a dataset based on data collected from strawberry plants, and then use our dataset as well as some well-established public datasets to evaluate and compare recent deep learning-based plant disease detection studies. Finally, we propose a solution that identifies plant diseases using a ResNet model with a novel variable learning rate which changes during the testing phase. We have explored different learning rates and found that the highest accuracy for classifying healthy and unhealthy strawberry plants, 99.77%, is obtained with a learning rate of 0.01. Experimental results confirm the effectiveness of the proposed system in achieving high disease detection accuracy.
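Why a learning-rate sweep matters can be illustrated on a toy problem. This is not the authors' ResNet pipeline, only a hedged sketch of gradient descent on a one-dimensional quadratic, showing that the step size controls how fast the loss is driven down.

```python
def train_quadratic(lr, steps=100):
    """Gradient descent on f(w) = (w - 3)^2; returns the final loss."""
    w = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)   # df/dw
        w -= lr * grad           # gradient-descent update
    return (w - 3.0) ** 2

# Sweep a few candidate learning rates, as the study does at a larger scale.
losses = {lr: train_quadratic(lr) for lr in (0.001, 0.01, 0.1)}
```

On this convex toy objective, larger (still-stable) learning rates converge faster; on a real network the best rate is found empirically, as in the study.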

    Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets

    Weeds compete with productive crops for soil, nutrients and sunlight and are therefore a major contributor to crop yield loss, which is why safer and more effective herbicide products are continually being developed. Digital evaluation tools to automate and homogenize field measurements are of vital importance to accelerate their development. However, the development of these tools requires the generation of semantic segmentation datasets, which is a complex, time-consuming and not easily affordable task. In this paper, we present a deep learning segmentation model that is able to distinguish between different plant species at the pixel level. First, we have generated three extensive datasets targeting one crop species (Zea mays), three grass species (Setaria verticillata, Digitaria sanguinalis, Echinochloa crus-galli) and three broadleaf species (Abutilon theophrasti, Chenopodium album, Amaranthus retroflexus). The first dataset consists of real field images that were manually annotated. The second dataset is composed of images of plots where only one species is present at a time, and the third dataset was synthetically generated from images of individual plants, mimicking the distribution of real field images. Second, we have proposed a semantic segmentation architecture that extends a PSPNet architecture with an auxiliary classification loss to aid model convergence. Our results show that network performance increases when the real field image dataset is supplemented with the other types of datasets, without increasing the manual annotation effort. More specifically, using the real field dataset alone obtains a Dice-Sørensen Coefficient (DSC) score of 25.32. This performance increases when this dataset is combined with the single-species dataset (DSC=47.97) or the synthetic dataset (DSC=45.20).
    As for the proposed model, an ablation study shows that removing the proposed auxiliary classification loss decreases segmentation performance (DSC=45.96) compared to the proposed architecture (DSC=47.97). The proposed method outperforms the current state of the art. In addition, the use of the proposed single-species or synthetic datasets can roughly double the performance of the algorithm compared to using real field datasets alone, without additional manual annotation effort. We would like to thank BASF technicians Rainer Oberst, Gerd Kraemer, Hikal Gad, Javier Romero and Juan Manuel Contreras, as well as Amaia Ortiz-Barredo from Neiker, for their support in the design of the experiments and the generation of the datasets used in this work. This work was partially supported by the Basque Government through ELKARTEK project BASQNET (ref K-2021/00014).
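The Dice-Sørensen Coefficient used to score segmentation overlap can be computed as below. This is a minimal NumPy sketch for a single binary mask pair; the paper's per-species, multi-class evaluation is not reproduced here.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice-Sørensen Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: intersection = 2 pixels, |pred| = |target| = 3,
# so DSC = 2*2 / (3+3) ≈ 0.667.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(pred, target)
```

A DSC of 1.0 means perfect pixel-level agreement; 0 means no overlap.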