
    Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions

    Convolutional Neural Networks (CNNs) have demonstrated their capabilities in the agronomic field, especially for the assessment of visual plant symptoms. As these models grow both in the number of training images and in the number of supported crops and diseases, a dichotomy arises between (1) generating smaller, crop-specific models or (2) generating a single multi-crop model that faces a much more complex task (especially at early disease stages) but benefits from the variability of the entire multi-crop image dataset to enrich feature learning. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone under real field conditions. It contains almost equally distributed disease stages of seventeen diseases across five crops (wheat, barley, corn, rice and rapeseed), and several diseases can be present in the same picture. Applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC=0.92 with the smaller crop-specific models and BAC=0.93 with a single multi-crop model. We then propose three CNN architectures that incorporate contextual non-image metadata, such as crop information, into an image-based convolutional network, combining the advantage of learning from the entire multi-crop dataset with a reduction in the complexity of the disease classification task. The crop-conditional network that incorporates the contextual information by concatenation at the embedding vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of the misclassifications of the former methods.
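
    The concatenation strategy described above can be sketched briefly; the following is a minimal, hypothetical Keras example of conditioning a disease classifier on a one-hot crop vector at the embedding level. The backbone choice and layer sizes are illustrative assumptions, not details from the paper.

        import tensorflow as tf
        from tensorflow.keras import layers, Model

        NUM_CROPS, NUM_DISEASES = 5, 17   # as in the dataset described above

        image_in = layers.Input(shape=(224, 224, 3), name="image")
        crop_in = layers.Input(shape=(NUM_CROPS,), name="crop_onehot")

        # Any CNN backbone that produces a global embedding vector will do here.
        backbone = tf.keras.applications.ResNet50(include_top=False, weights=None, pooling="avg")
        embedding = backbone(image_in)                      # shape (batch, 2048)

        # Crop conditioning by concatenation at the embedding-vector level.
        x = layers.Concatenate()([embedding, crop_in])
        x = layers.Dense(256, activation="relu")(x)
        out = layers.Dense(NUM_DISEASES, activation="softmax", name="disease")(x)

        model = Model(inputs=[image_in, crop_in], outputs=out)
        model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])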

    Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case

    Disease diagnosis based on the detection of early symptoms is a common threshold used in integrated pest management strategies. Early phytosanitary treatment minimizes yield losses and increases the efficacy and efficiency of the treatments. However, the appearance of new diseases associated with new resistant crop variants complicates their early identification and delays the application of appropriate corrective actions. Image-based automated identification systems can bring early disease detection to farmers and technicians, but they perform poorly on mobile devices under real field conditions. A novel image processing algorithm based on candidate hot-spot detection in combination with statistical inference methods is proposed to tackle disease identification in the wild. This work analyses the performance of early identification of three endemic European wheat diseases: septoria, rust and tan spot. The analysis was done using 7 mobile devices and more than 3,500 images captured at two pilot sites in Spain and Germany during 2014, 2015 and 2016. The results reveal AuC (area under the Receiver Operating Characteristic, ROC, curve) values higher than 0.80 for all the analysed diseases in the pilot tests under real conditions.
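
    As a small illustration of the headline metric, the following hedged sketch shows how a per-disease AuC could be computed with scikit-learn; the labels and detector scores are random placeholders rather than data from the study.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        for disease in ("septoria", "rust", "tan spot"):
            y_true = rng.integers(0, 2, size=200)                          # 1 = disease present
            y_score = np.clip(0.5 * y_true + 0.7 * rng.random(200), 0, 1)  # detector confidence
            print(f"{disease}: AuC = {roc_auc_score(y_true, y_score):.2f}")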

    Robust automated detection of microstructural white matter degeneration in Alzheimer’s disease using machine learning classification of multicenter DTI data

    Diffusion tensor imaging (DTI) based assessment of white matter fiber tract integrity can support the diagnosis of Alzheimer’s disease (AD). The use of DTI as a biomarker, however, depends on its applicability in a multicenter setting accounting for the effects of different MRI scanners. We applied multivariate machine learning (ML) to a large multicenter sample from the recently created framework of the European DTI Study on Dementia (EDSD). We hypothesized that ML approaches may compensate for the effects of multicenter acquisition. We included a sample of 137 patients with clinically probable AD (MMSE 20.6±5.3) and 143 healthy elderly controls, scanned on nine different scanners. For diagnostic classification we used the DTI indices fractional anisotropy (FA) and mean diffusivity (MD) and, for comparison, gray matter and white matter density maps from anatomical MRI. Data were classified using a Support Vector Machine (SVM) and a Naïve Bayes (NB) classifier. We used two cross-validation approaches: (i) test and training samples randomly drawn from the entire data set (pooled cross-validation) and (ii) data from each scanner as the test set, with data from the remaining scanners as the training set (scanner-specific cross-validation). In the pooled cross-validation, the SVM achieved an accuracy of 80% for FA and 83% for MD. Accuracies for NB were significantly lower, ranging between 68% and 75%. Removing variance components arising from scanners using principal component analysis did not significantly change the classification results for either classifier. For the scanner-specific cross-validation, classification accuracy was reduced for both SVM and NB. After mean correction, classification accuracy reached a level comparable to the results obtained from the pooled cross-validation. Our findings support the notion that machine learning allows robust classification of DTI data sets arising from multiple scanners, even if a new data set comes from a scanner that was not part of the training sample.
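
    The two cross-validation schemes described above map directly onto standard scikit-learn utilities. The sketch below uses placeholder feature vectors and scanner labels; it is not the authors' code, only an illustration of pooled versus scanner-specific (leave-one-scanner-out) evaluation with an SVM and a Gaussian Naive Bayes classifier.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

        rng = np.random.default_rng(0)
        X = rng.normal(size=(280, 50))           # placeholder FA/MD feature vectors
        y = rng.integers(0, 2, size=280)         # 0 = control, 1 = AD
        scanner = rng.integers(0, 9, size=280)   # which of the nine scanners

        for name, clf in (("SVM", SVC(kernel="linear")), ("NB", GaussianNB())):
            pooled = cross_val_score(clf, X, y, cv=10)                                  # pooled CV
            scanner_cv = cross_val_score(clf, X, y, groups=scanner, cv=LeaveOneGroupOut())
            print(f"{name}: pooled={pooled.mean():.2f}, scanner-specific={scanner_cv.mean():.2f}")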

    A Mixed Data-Based Deep Neural Network to Estimate Leaf Area Index in Wheat Breeding Trials

    Remote and non-destructive estimation of leaf area index (LAI) has been a challenge over the last few decades, as the available direct and indirect methods are laborious and time-consuming. The recent emergence of high-throughput plant phenotyping platforms has increased the need for new phenotyping tools that support better decision-making by breeders. In this paper, a novel model based on artificial intelligence algorithms and nadir-view red-green-blue (RGB) images taken from a terrestrial high-throughput phenotyping platform is presented. The model mixes numerical data collected in a wheat breeding field with visual features extracted from the images to make rapid and accurate LAI estimations. Model-based LAI estimations were validated against LAI measurements determined non-destructively using an allometric relationship obtained in this study. The model performance was also compared with LAI estimates obtained by other classical indirect methods based on bottom-up hemispherical images and gap fraction theory. Model-based LAI estimations were highly correlated with ground-truth LAI. The model performed slightly better than the hemispherical image-based method, which tended to underestimate LAI. These results show the great potential of the developed model for near real-time LAI estimation, which can be further improved in the future by increasing the dataset used to train the model.
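
    A mixed-data model of this kind is commonly built with two input branches that are merged before the regression output. The following is a hedged Keras sketch under assumed input sizes and feature names; it illustrates the general architecture rather than the exact model of the paper.

        import tensorflow as tf
        from tensorflow.keras import layers, Model

        # Image branch: nadir-view RGB image reduced to a feature vector.
        image_in = layers.Input(shape=(128, 128, 3), name="nadir_rgb")
        x = layers.Conv2D(32, 3, activation="relu")(image_in)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(64, 3, activation="relu")(x)
        x = layers.GlobalAveragePooling2D()(x)

        # Numeric branch: field measurements (names and count are assumptions).
        numeric_in = layers.Input(shape=(4,), name="field_data")
        n = layers.Dense(16, activation="relu")(numeric_in)

        # Merge both branches and regress a single LAI value.
        merged = layers.Concatenate()([x, n])
        merged = layers.Dense(64, activation="relu")(merged)
        lai_out = layers.Dense(1, name="lai")(merged)

        model = Model(inputs=[image_in, numeric_in], outputs=lai_out)
        model.compile(optimizer="adam", loss="mse", metrics=["mae"])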

    Customized CNN Model for Multiple Illness Identification in Rice and Maize

    Crop diseases imperil global food security and economies, demanding early detection and effective management. Convolutional Neural Networks (CNNs) have gained traction for rice and maize leaf disease classification thanks to their automatic feature extraction capabilities. CNN models eliminate manual feature engineering, enabling precise disease diagnosis based on learned features. Researchers have rapidly advanced these models, achieving promising results. Leaf disease characteristics such as color changes, texture variations, and lesion appearance have been identified as useful for automated diagnosis using machine learning. Developing CNN models involves several crucial stages: dataset preparation, architecture selection, hyperparameter tuning, and model training and evaluation. Diverse and accurately annotated datasets are pivotal, and selecting an appropriate CNN architecture, such as ResNet101 or XceptionNet, ensures optimal performance; pre-training these architectures on vast image datasets enhances feature extraction. Hyperparameter tuning refines the model, while training and evaluation gauge its precision. CNN models hold the potential to enhance rice and maize productivity and global food security by effectively detecting and managing diseases.
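
    The stages named above (pretrained backbone, new classification head, fine-tuning) follow a standard transfer-learning recipe. The sketch below uses Xception as an example; the class count, input size, and training schedule are assumptions for illustration only.

        import tensorflow as tf
        from tensorflow.keras import layers, Model

        NUM_CLASSES = 8   # assumed number of rice and maize disease classes

        base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                              input_shape=(299, 299, 3), pooling="avg")
        base.trainable = False                      # freeze the pretrained features first

        inputs = layers.Input(shape=(299, 299, 3))
        x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)   # Xception expects [-1, 1]
        x = base(x, training=False)
        outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

        model = Model(inputs, outputs)
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        # After the new head converges, unfreeze the top blocks of `base` and continue
        # training with a much lower learning rate (e.g. 1e-5) to fine-tune.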

    Plant Disease Diagnosing Based on Deep Learning Techniques: A Survey and Research Challenges

    Agricultural crops are highly significant for the sustenance of human life and act as an essential source of national income worldwide. Plant diseases and pests are among the most important factors affecting food production and quality and causing production losses. Farmers currently face difficulty in identifying the various plant diseases and pests, which must be recognized to prevent plant diseases effectively in a complicated environment. Recently developed deep learning techniques have found use in the diagnosis of plant diseases and pests, providing a robust tool with highly accurate results. In this context, this paper presents a comprehensive review of the literature that aims to identify the state of the art in the use of convolutional neural networks (CNNs) for diagnosing and identifying plant pests and diseases. In addition, it presents issues affecting model performance and indicates gaps that should be addressed in the future. In this regard, we review studies with various methods that address plant disease detection, their dataset characteristics, crops, and pathogens. Moreover, the paper discusses the commonly employed five-step methodology for plant disease recognition, involving data acquisition, preprocessing, segmentation, feature extraction, and classification, as well as various deep learning architecture-based solutions with a faster convergence rate for plant disease recognition. From this review it is possible to understand the innovative trends in the use of CNN algorithms for plant disease diagnosis and to recognize the gaps that need the attention of the research community.

    Quantitative estimation of plant characteristics using spectral measurement: A survey of the literature

    There are no author-identified significant results in this report

    Latent Dirichlet Allocation Uncovers Spectral Characteristics of Drought Stressed Plants

    Understanding the adaptation process of plants to drought stress is essential for improving management practices, breeding strategies, and engineering viable crops for a sustainable agriculture in the coming decades. Hyper-spectral imaging provides a particularly promising approach to gain such understanding, since it allows the non-destructive discovery of spectral characteristics of plants governed primarily by the scattering and absorption characteristics of the leaf's internal structure and biochemical constituents. Several drought stress indices have been derived using hyper-spectral imaging. However, they are typically based on only a few hyper-spectral images, rely on expert interpretation, and consider only a few wavelengths. In this study, we present the first data-driven approach to discovering spectral drought stress indices, treating it as an unsupervised labeling problem at massive scale. To make use of the short-range dependencies of spectral wavelengths, we develop an online variational Bayes algorithm for latent Dirichlet allocation with a convolved Dirichlet regularizer. This approach scales to massive datasets and hence provides a more objective complement to plant physiological practices. The spectral topics found conform to plant physiological knowledge and can be computed in a fraction of the time compared to existing LDA approaches. Comment: Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI 2012).
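
    For orientation, plain online variational Bayes LDA is available in scikit-learn; the sketch below applies it to hyperspectral pixels treated as "documents" of wavelength-bin "words". It does not implement the paper's convolved Dirichlet regularizer, which couples neighbouring wavelengths, and all data here are random placeholders.

        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation

        rng = np.random.default_rng(0)
        reflectance = rng.random((5000, 200))            # 5000 pixels x 200 spectral bands

        # LDA expects count data, so quantise reflectance into pseudo-counts per band.
        counts = np.rint(reflectance * 100).astype(int)

        lda = LatentDirichletAllocation(n_components=10, learning_method="online",
                                        batch_size=512, random_state=0)
        pixel_topics = lda.fit_transform(counts)          # per-pixel topic proportions
        spectral_topics = lda.components_                 # per-topic weights over wavelength bins
        print(pixel_topics.shape, spectral_topics.shape)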

    Energy Efficiency Prediction using Artificial Neural Network

    Building energy consumption is growing steadily and accounts for around 40% of total energy use. Predicting the heating and cooling loads of a building is important both in the initial design phase, to find optimal solutions among different designs, and in the operating phase, to run the finished building efficiently. In this study, an artificial neural network model was designed and developed for predicting the heating and cooling loads of a building based on a dataset of building energy performance. The input variables are relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution of the building; the output variables are the building's heating and cooling loads. The training dataset consists of data published in the literature for 768 different residential buildings. The model was trained and validated, the most important factors affecting heating and cooling loads were identified, and the validation accuracy was 99.60%.
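
    A minimal sketch of such a network is shown below, assuming the documented eight inputs and two outputs; the data arrays are placeholders with the stated shapes, and the hidden-layer sizes are arbitrary choices rather than the configuration used in the study.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        # Placeholder data with the documented shapes: 8 design features -> 2 loads.
        X = np.random.rand(768, 8)   # compactness, surface/wall/roof area, height,
                                     # orientation, glazing area, glazing distribution
        y = np.random.rand(768, 2)   # heating load, cooling load

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        scaler = StandardScaler().fit(X_train)

        ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
        ann.fit(scaler.transform(X_train), y_train)
        print("Held-out R^2:", ann.score(scaler.transform(X_test), y_test))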

    Review of Leaf Unhealthy Region Detection Using Image Processing Techniques

    In the agricultural field, plants come under attack from various pests and from bacterial and micro-organism diseases. These diseases attack the leaves, stems, and fruit of the plant. This review paper discusses the image processing techniques used for early detection of plant diseases through leaf feature inspection. The basic objective of this work is to develop image analysis and classification techniques to extract and finally classify the diseases present on a leaf. An image of the leaf is captured and processed to determine the status of each plant. The proposed model is divided into four parts: image preprocessing, including normalization and contrast adjustment; segmentation of the region of interest using the YCbCr color transform and bi-level thresholding, used statistically to determine the defect and severity area of the plant leaves; texture feature extraction using the statistical GLCM (Gray Level Co-occurrence Matrix) and color features by means of mean values [1]; and, finally, classification using a random Markov model.
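
    The segmentation and texture-feature stages described above can be sketched with OpenCV and scikit-image. The function below is an assumed illustration, using Otsu bi-level thresholding on the Cr channel of the YCbCr transform and GLCM statistics; the classification stage (the random Markov model) is not reproduced here.

        import cv2
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def leaf_features(path):
            bgr = cv2.imread(path)
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
            cr = ycrcb[:, :, 1]
            # Bi-level (Otsu) thresholding to separate defect pixels from healthy tissue.
            _, mask = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            severity = mask.mean() / 255.0              # fraction of pixels flagged as defect
            # GLCM texture features on the grey-level image.
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            texture = [graycoprops(glcm, p)[0, 0]
                       for p in ("contrast", "homogeneity", "energy", "correlation")]
            # Colour features as per-channel mean values.
            color_means = bgr.reshape(-1, 3).mean(axis=0).tolist()
            return [severity, *texture, *color_means]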