
    Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions

    Convolutional Neural Networks (CNN) have demonstrated their capabilities in the agronomic field, especially for assessing plant visual symptoms. As these models grow both in the number of training images and in the number of supported crops and diseases, a dichotomy arises between (1) generating smaller models, each specific to a single crop, or (2) generating a unique multi-crop model, a much more complex task (especially at early disease stages) but one that benefits from the variability of the entire multi-crop image dataset to enrich the learning of image feature descriptions. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone under real, uncontrolled field conditions. This dataset contains almost equally distributed disease stages of seventeen diseases and five crops (wheat, barley, corn, rice and rape-seed), where several diseases can be present in the same picture. Applying existing state-of-the-art deep neural network methods to validate the two hypothesized approaches, we obtained a balanced accuracy of BAC = 0.92 when generating the smaller crop-specific models and BAC = 0.93 when generating a single multi-crop model. We then propose three different CNN architectures that incorporate contextual non-image metadata, such as crop information, into an image-based Convolutional Neural Network. This combines the advantages of learning from the entire multi-crop dataset while reducing the complexity of the disease classification task. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding-vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of the misclassifications of the former methods.
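    The embedding-level concatenation the abstract describes can be sketched as follows. This is a minimal illustration in PyTorch, assuming a ResNet-18 backbone and a one-hot crop vector; the layer sizes and names are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CropConditionalCNN(nn.Module):
    """Image CNN whose embedding is concatenated with a one-hot crop
    vector before the disease head (illustrative, not the authors' net)."""
    def __init__(self, num_crops=5, num_diseases=17, embed_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d image embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(embed_dim + num_crops, 256),
            nn.ReLU(),
            nn.Linear(256, num_diseases),
        )

    def forward(self, image, crop_onehot):
        z = self.backbone(image)                 # (B, 512) image embedding
        z = torch.cat([z, crop_onehot], dim=1)   # concatenate crop metadata
        return self.head(z)

model = CropConditionalCNN()
img = torch.randn(2, 3, 224, 224)
crop = torch.eye(5)[[0, 3]]        # one-hot crop IDs for a batch of two
logits = model(img, crop)          # (2, 17) disease logits
```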

    An embedded system for the automated generation of labeled plant images to enable machine learning applications in agriculture

    A lack of sufficient training data, both in terms of variety and quantity, is often the bottleneck in the development of machine learning (ML) applications in any domain. For agricultural applications, ML-based models designed to perform tasks such as autonomous plant classification will typically be coupled to just one or perhaps a few plant species. As a consequence, each crop-specific task is very likely to require its own specialized training data, and the question of how to serve this need for data now often overshadows the more routine exercise of actually training such models. To tackle this problem, we have developed an embedded robotic system that automatically generates and labels large datasets of plant images for ML applications in agriculture. The system can image plants from virtually any angle, thereby ensuring a wide variety of data; and with an imaging rate of up to one image per second, it can produce labeled datasets on the scale of thousands to tens of thousands of images per day. As such, this system offers an important alternative to time- and cost-intensive methods of manual generation and labeling. Furthermore, the use of a uniform background made of blue keying fabric enables additional image processing techniques such as background replacement and plant segmentation. It also helps in the training process, essentially forcing the model to focus on the plant features and eliminating random correlations. To demonstrate the capabilities of our system, we generated a dataset of over 34,000 labeled images, with which we trained an ML model to distinguish grasses from non-grasses in test data from a variety of sources. We now plan to generate much larger datasets of Canadian crop plants and weeds that will be made publicly available in the hope of further enabling ML applications in the agriculture sector. (Comment: 35 pages, 8 figures, preprint submitted to PLoS ONE)
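    The blue keying fabric makes background removal a simple chroma-key operation. Below is a minimal sketch using OpenCV; the HSV threshold values and file names are illustrative assumptions that would need calibration to the actual fabric and lighting.

```python
import cv2

def segment_plant(image_bgr):
    """Mask out a blue keying background via HSV thresholding.
    Threshold values are illustrative, not calibrated to any real setup."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Blue hue band on OpenCV's 0-179 hue scale (assumed for a blue screen).
    background = cv2.inRange(hsv, (90, 60, 40), (130, 255, 255))
    plant_mask = cv2.bitwise_not(background)
    # Keep only plant pixels; the background becomes black.
    plant = cv2.bitwise_and(image_bgr, image_bgr, mask=plant_mask)
    return plant, plant_mask

img = cv2.imread("plant.jpg")                 # hypothetical input path
segmented, mask = segment_plant(img)
cv2.imwrite("plant_segmented.png", segmented)
```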

    A W-Shaped Convolutional Network for Robust Crop and Weed Classification in Agriculture

    Agricultural image and vision computing differ significantly from other object-classification problems because the two base classes in agriculture, crops and weeds, share many common traits. Efficient crop, weed, and soil classification is required to perform autonomous activities (spraying, harvesting, etc.) in agricultural fields. In a three-class (crop-weed-background) agricultural classification scenario, it is usually easier to classify the background class accurately than the crop and weed classes, because the background appears significantly different feature-wise from crops and weeds. However, robustly distinguishing between the crop and weed classes is challenging because their appearance features generally look very similar. To address this problem, we propose a framework based on a convolutional W-shaped network with two encoder-decoder structures of different sizes. The first encoder-decoder structure differentiates between background and vegetation (crop and weed), and the second encoder-decoder structure learns discriminating features to classify the crop and weed classes efficiently. The proposed W network generalizes to different crop types. The effectiveness of the proposed network is demonstrated on two crop datasets, a tobacco dataset and a sesame dataset, both collected in this study and made publicly available online for use by the community, by evaluating and comparing its performance with existing related methods. The proposed method consistently outperforms existing related methods on both datasets.
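    The two-stage design can be sketched as a pair of chained encoder-decoders: the first produces a vegetation mask, and the second classifies crop versus weed given the image and that mask. The PyTorch sketch below is an assumed minimal rendering of that idea, not the paper's architecture; the layer widths are arbitrary stand-ins for the "different sizes" of the two structures.

```python
import torch
import torch.nn as nn

def encoder_decoder(in_ch, out_ch, width):
    """A tiny encoder-decoder; stands in for one arm of the W network."""
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(width, out_ch, 4, stride=2, padding=1),
    )

class WNet(nn.Module):
    """Two chained encoder-decoders: stage 1 separates vegetation from
    background; stage 2 classifies vegetation pixels into crop vs. weed."""
    def __init__(self):
        super().__init__()
        self.stage1 = encoder_decoder(3, 1, width=16)   # vegetation logits
        self.stage2 = encoder_decoder(4, 2, width=32)   # crop/weed logits

    def forward(self, x):
        veg = torch.sigmoid(self.stage1(x))   # (B,1,H,W) vegetation probability
        # Stage 2 sees the image plus the vegetation map as a fourth channel.
        crop_weed = self.stage2(torch.cat([x, veg], dim=1))
        return veg, crop_weed

model = WNet()
veg, cw = model(torch.randn(1, 3, 128, 128))  # (1,1,128,128), (1,2,128,128)
```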

    Sensors in agriculture and forestry

    Agriculture and forestry are two broad and promising areas demanding technological solutions aimed at increasing production, or at producing accurate inventories for sustainability, while minimizing environmental impact by reducing the application of agro-chemicals and increasing the use of environmentally friendly agronomic practices. An immediate consequence of this trend is the reduction of production costs. Sensor-based technologies provide appropriate tools to achieve the above-mentioned goals. The explosive technological advances and development of recent years enormously facilitate the attainment of these objectives, removing many barriers to their implementation, including the reservations expressed by farmers themselves. Precision agriculture is an emerging area where sensor-based technologies play an important role. This work was supported by the RHEA project [42], funded by the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 245986, which has been the platform for the two international conferences on Robotics and associated High-technologies and Equipment mentioned above. Peer Reviewed

    Leveraging Image Analysis for High-Throughput Plant Phenotyping

    The complex interaction between a genotype and its environment controls the biophysical properties of a plant, manifested in observable traits, i.e., the plant’s phenome, which influences resource acquisition, performance, and yield. High-throughput automated image-based plant phenotyping refers to non-destructively sensing and quantifying plant traits by analyzing images captured at regular intervals and with precision. While phenomic research has drawn significant attention in the last decade, extracting meaningful and reliable numerical phenotypes from plant images, especially by considering individual components, e.g., leaves, stem, fruit, and flower, remains a critical bottleneck to translating advances in phenotyping technology into genetic insights, due to various challenges including lighting variations, plant rotations, and self-occlusions. The paper provides (1) a framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system; (2) a taxonomy of phenotypes that may be derived by image analysis for a better understanding of morphological structure and functional processes in plants; (3) a brief discussion of publicly available datasets to encourage algorithm development and uniform comparison with state-of-the-art methods; (4) an overview of state-of-the-art image-based high-throughput plant phenotyping methods; and (5) open problems for the advancement of this research field.

    Modern Seed Technology

    Satisfying the increasing number of consumer demands for high-quality seeds with enhanced performance is one of the most imperative challenges of modern agriculture. In this view, it is essential to remember that the seed quality of crops does not improve …

    Quantifying corn emergence using UAV imagery and machine learning

    Corn (Zea mays L.) is one of the most important crops in the United States for animal feed, ethanol production, and human consumption. To maximize final corn yield, one of the critical factors is to improve the uniformity of corn emergence both temporally (emergence date) and spatially (plant spacing). Conventionally, emergence uniformity is assessed through visual observation by farmers at selected small plots taken to represent the whole field, but this is limited by the time and labor required. With the advance of unmanned aerial vehicle (UAV)-based imaging technology and advanced image processing techniques powered by machine learning (ML) and deep learning (DL), a more automatic, non-subjective, precise, and accurate field-scale assessment of emergence uniformity becomes possible. Previous studies have demonstrated successful assessment of crop emergence uniformity using UAV imagery, specifically in fields with a simple soil background. No research had investigated the feasibility of UAV imagery for corn emergence assessment in fields under conservation agriculture, which are covered with cover crops or residues to improve soil health and sustainability. The overall goal of this research was to develop a fast and accurate method for the assessment of corn emergence using UAV imagery and ML and DL techniques. The pertinent information is essential for early and in-season decision making in corn production as well as for agronomy research. The research comprised three main studies: Study 1, quantifying corn emergence date using UAV imagery and an ML model; Study 2, estimating corn stand count in different cropping systems (CS) using UAV images and DL; and Study 3, estimating and mapping corn emergence under different planting depths. Two case studies extended Study 3 to field-scale applications by relating emergence uniformity derived from the developed method to planting-depth treatments and estimating final yield. For all studies, the primary imagery data were collected using a consumer-grade UAV equipped with a red-green-blue (RGB) camera at a flight height of approximately 10 m above ground level. The imagery data had a ground sampling distance (GSD) of 0.55 - 3.00 mm pixel⁻¹, sufficient to detect small seedlings. In addition, a UAV multispectral camera was used to capture corn plants at early growth stages (V4, V6, and V7) in the case studies to extract plant reflectance (vegetation indices, VIs) as indicators of plant growth variation. Random forest (RF) ML models were used to classify corn emergence date, expressed as days after emergence (DAE) at the time of assessment, and to estimate yield. The DL models U-Net and ResNet18 were used, respectively, to segment corn seedlings from UAV images and to estimate emergence parameters, including plant density, average DAE (DAEmean), and plant spacing standard deviation (PSstd).
    Results from Study 1 indicated that individual corn plant quantification using UAV imagery and an RF ML model achieved moderate classification accuracies of 0.20 - 0.49, which increased to 0.55 - 0.88 when DAE classification was expanded to a 3-day window. In Study 2, the precision of image segmentation by the U-Net model was ≥ 0.81 for all CS, resulting in high accuracies in estimating plant density (R² ≥ 0.92; RMSE ≤ 0.48 plants m⁻¹). The ResNet18 model in Study 3 was able to estimate emergence parameters with high accuracies (0.97, 0.95, and 0.73 for plant density, DAEmean, and PSstd, respectively). The case studies showed that crop emergence maps and evaluation under field conditions indicated the expected trend of decreasing plant density and DAEmean with increasing planting depth, and the opposite for PSstd. However, mixed trends were found for emergence parameters among planting depths at different replications and across the N-S direction of the fields. For yield estimation, emergence data alone showed no relation with final yield (R² = 0.01, RMSE = 720 kg ha⁻¹). The combination of VIs from all growth stages was only able to estimate yield with an R² of 0.34 and an RMSE of 560 kg ha⁻¹. In summary, this research demonstrated the success of UAV imagery and ML/DL techniques in assessing and mapping corn emergence in fields practicing all or some components of conservation agriculture. The findings give more insight for future agronomic and breeding studies by providing field-scale crop emergence evaluations as affected by treatments and management, and by relating emergence assessment to final yield. These emergence evaluations may also be useful for commercial companies needing to relate new precision-planting technologies to crop performance. For commercial crop production, more comprehensive emergence maps (in terms of temporal and spatial uniformity) will help in making better replanting or early management decisions. Further enhancement of the methods, such as validation studies in different locations and years and the development of interactive frameworks, will establish a more automatic, robust, precise, accurate, and 'ready-to-use' approach to estimating and mapping crop emergence uniformity.
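    As a hedged illustration of the Study 1 approach, the snippet below trains a random forest to classify DAE from per-plant image features and then scores accuracy within a 3-day window. The features, the data, and the ±1-day reading of the window are assumptions for demonstration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for per-plant features (e.g., seedling area, height,
# leaf count) and DAE labels; the real study derives these from UAV imagery.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = rng.integers(0, 7, size=500)      # DAE label, 0-6 days

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

exact = accuracy_score(y_te, pred)             # exact-day accuracy
window = np.mean(np.abs(pred - y_te) <= 1)     # 3-day window (±1 day, assumed)
print(f"exact: {exact:.2f}, 3-day window: {window:.2f}")
```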

    GrapeNet: A Lightweight Convolutional Neural Network Model for Identification of Grape Leaf Diseases

    Most convolutional neural network (CNN) models have difficulty identifying crop diseases owing to morphological and physiological changes in crop tissues and cells. Furthermore, a single crop disease can show different symptoms: the differences between early and late disease stages usually include the diseased area and its color, which poses additional difficulties for CNN models. Here, we propose a lightweight CNN model called GrapeNet for identifying the different symptom stages of specific grape diseases. The main components of GrapeNet are residual blocks, residual feature fusion blocks (RFFBs), and convolutional block attention modules. The residual blocks are used to deepen the network and extract rich features. To alleviate the CNN performance degradation associated with a large number of hidden layers, we designed an RFFB module based on the residual block. It fuses the average-pooled feature map from before the residual block input with the high-dimensional feature maps from after the residual block output by a concatenation operation, thereby achieving feature fusion at different depths. In addition, a convolutional block attention module (CBAM) is introduced after each RFFB module to extract valid disease information. The identification accuracy was 82.99%, 84.01%, 82.74%, 84.77%, 80.96%, 82.74%, 80.96%, 83.76%, and 86.29% for GoogLeNet, Vgg16, ResNet34, DenseNet121, MobileNetV2, MobileNetV3_large, ShuffleNetV2_×1.0, EfficientNetV2_s, and GrapeNet, respectively. The GrapeNet model achieved the best classification performance when compared with the other classical models, with only 2.15 million parameters in total. Compared with DenseNet121, which has the highest accuracy among the classical network models, GrapeNet has 4.81 million fewer parameters, roughly halving the training time relative to DenseNet121. Moreover, the Grad-CAM visualization results indicate that introducing CBAM emphasizes disease information and suppresses irrelevant information. The overall results suggest that the GrapeNet model is useful for the automatic identification of grape leaf diseases.
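    Based only on the description above, the RFFB fusion can be sketched as a PyTorch module: the block input is average-pooled, the residual branch produces higher-dimensional maps, and the two are concatenated and mixed. The layer shapes, the 1x1 fusion convolution, and the stride are assumptions; the authors' exact design may differ, and a CBAM would follow each such block.

```python
import torch
import torch.nn as nn

class RFFB(nn.Module):
    """Residual feature fusion block as described in the abstract: the
    block input is average-pooled and concatenated with the residual
    branch's output (a sketch from the abstract, not the authors' code)."""
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.residual = nn.Sequential(        # plain residual-style branch
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.pool = nn.AvgPool2d(kernel_size=stride, stride=stride)
        self.fuse = nn.Conv2d(out_ch + in_ch, out_ch, 1)  # mix concatenated maps

    def forward(self, x):
        low = self.pool(x)           # average-pooled input (before the block)
        high = self.residual(x)      # high-dimensional output (after the block)
        return self.fuse(torch.cat([high, low], dim=1))

blk = RFFB(64, 128)
out = blk(torch.randn(1, 64, 56, 56))    # -> (1, 128, 28, 28)
```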

    Sustainable Agriculture and Advances of Remote Sensing (Volume 2)

    Agriculture, as the main source of alimentation and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, to reduce biodiversity loss, and to preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data and in situ and proxy remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.

    Quantification of virus syndrome in chili peppers

    One of the most important problems in producing chili crops is the presence of diseases caused by pathogenic agents such as viruses. There is therefore a substantial need to better predict the behavior of these diseases through more precise quantification of disease syndromes, which would allow investigators to evaluate better practices, from handling to the experimental level, and would permit producers to take timely corrective action, thereby reducing production losses and increasing crop quality. This review discusses methods that have been used for the quantification of disease in plants, specifically for chili pepper crops, and suggests a better alternative for quantifying disease syndromes in this crop. The result of these reflections is that most quantification methods are based on visual assessment, which ignores differences between evaluators and generates subjective results.
    Key words: Quantification, plant diseases, severity, syndrome, viruses
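    A common image-based alternative to visual severity scoring is to compute the ratio of symptomatic pixels to total leaf pixels. The OpenCV sketch below illustrates the idea; the HSV thresholds and the file name are illustrative assumptions that must be calibrated to the crop, the symptom color, and the lighting.

```python
import cv2

def severity_ratio(image_bgr):
    """Estimate disease severity as symptomatic area / total leaf area.
    HSV thresholds are illustrative; real values must be calibrated
    per crop, symptom color, and lighting conditions."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    leaf = cv2.inRange(hsv, (25, 40, 40), (95, 255, 255))    # green leaf pixels
    lesion = cv2.inRange(hsv, (10, 40, 40), (25, 255, 255))  # yellow/brown lesions
    total = cv2.countNonZero(leaf) + cv2.countNonZero(lesion)
    return cv2.countNonZero(lesion) / total if total else 0.0

img = cv2.imread("chili_leaf.jpg")       # hypothetical input path
print(f"severity: {severity_ratio(img):.1%}")
```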