
    A machine learning approach for grain crop's seed classification in purifying separation

    The paper presents a study of the ability of machine learning to classify seeds of a grain crop in order to improve purification processing. Seed features that are hard to separate with mechanical methods are handled with a machine learning approach. A special training image set was collected to check whether the stated approach is reasonable to use. A set of tests demonstrates the effectiveness of machine learning for the stated task, and the potential to improve the approach with deep learning in further research is described.
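
    A minimal sketch of the kind of pipeline this abstract describes: flattened pixel intensities from a labelled set of seed images feed a conventional classifier. The folder layout, image size and SVM choice are illustrative assumptions, not details from the paper.

```python
# Hypothetical seed-image classifier: per-class folders of seed crops,
# grayscale pixels as features, an SVM as the learner. All names here
# (seed_images/, 32x32 resize) are assumptions for illustration.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def load_dataset(root="seed_images"):  # assumed layout: root/<class>/<img>.png
    X, y = [], []
    for label_dir in (d for d in Path(root).iterdir() if d.is_dir()):
        for img_path in label_dir.glob("*.png"):
            img = Image.open(img_path).convert("L").resize((32, 32))
            X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
            y.append(label_dir.name)
    return np.array(X), np.array(y)

X, y = load_dataset()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```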


    A computer vision system based on majority-voting ensemble neural network for the automatic classification of three chickpea varieties

    Since different varieties of crops have specific applications, it is important to properly identify each cultivar in order to avoid fake varieties being sold as genuine, i.e., fraud. Although properly trained human experts can accurately identify and classify crop varieties, computer vision systems are needed because conditions such as fatigue and poor reproducibility can influence an expert's judgment and assessment. Chickpea (Cicer arietinum L.) is an important legume worldwide and has several varieties. Three chickpea varieties with a rather similar visual appearance were studied here: Adel, Arman, and Azad. The purpose of this paper is to present a computer vision system for the automatic classification of those chickpea varieties. First, segmentation was performed using a Hue Saturation Intensity (HSI) color space threshold. Next, color and textural properties (the latter from the gray level co-occurrence matrix, GLCM) were extracted from the chickpea sample images. Then, using the hybrid artificial neural network-cultural algorithm (ANN-CA), a sub-optimal combination of the five most effective properties (mean of the RGB color space components, mean of the HSI color space components, entropy of the GLCM at 90°, standard deviation of the GLCM at 0°, and mean of the third component in the YCbCr color space) was selected as the discriminant feature set. Finally, an ANN-PSO/ACO/HS majority voting (MV) ensemble merging the outputs of three different classifiers, namely the hybrid artificial neural network-particle swarm optimization (ANN-PSO), hybrid artificial neural network-ant colony optimization (ANN-ACO), and hybrid artificial neural network-harmony search (ANN-HS), was used. Results showed that the ensemble ANN-PSO/ACO/HS-MV classifier reached an average classification accuracy of 99.10 ± 0.75% over the test set, after averaging 1000 random iterations. Funding: Unión Europea (project 585596-EPP-1-2017-1-DE-EPPKA2-CBHE-JP).
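
    The decisive step in this pipeline is the majority vote over the three hybrid classifiers. Below is a small sketch of that combination step, assuming any three already-fitted scikit-learn-style classifiers stand in for ANN-PSO, ANN-ACO and ANN-HS (the metaheuristic training itself is not reproduced here).

```python
import numpy as np

def majority_vote(classifiers, X):
    """Predict with each classifier and return the per-sample majority label."""
    # votes: shape (n_classifiers, n_samples)
    votes = np.stack([clf.predict(X) for clf in classifiers])
    winners = []
    for column in votes.T:  # one column of votes per sample
        labels, counts = np.unique(column, return_counts=True)
        winners.append(labels[np.argmax(counts)])  # ties -> first label in sorted order
    return np.array(winners)

# usage (hypothetical, with three fitted models):
# y_pred = majority_vote([ann_pso, ann_aco, ann_hs], X_test)
```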

    Rice Seed Varieties Identification based on Extracted Colour Features using Image Processing and Artificial Neural Network (ANN)

    Determination of rice seed varieties is very important to ensure varietal purity in the production of high-quality seed. To date, manual seed inspection is carried out to separate foreign rice seed varieties from a rice seed sample in the laboratory, as there is a lack of automatic seed classification systems. This paper describes a simple approach using image processing techniques and an artificial neural network (ANN) to determine rice seed varieties based on colour features extracted from individual seed images. The experiment was conducted using 200 individual seed images of two Malaysian rice seed varieties, namely MR 219 and MR 269. The acquired seed images were processed using a set of image processing procedures to enhance image quality. Colour feature extraction was carried out to extract the red (R), green (G), blue (B), hue (H), saturation (S), value (V) and intensity (I) levels of the individual seed images. Classification using the ANN was carried out by dividing the data into training (70% of the data), validation (15%) and testing (15%) sets. The best ANN model to determine the rice seed varieties used 40 hidden neurons; the classification accuracies were 67.5% for the testing set and 76.7% for the training set.
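
    The seven colour features and the 70/15/15 split lend themselves to a compact sketch. The colour conversions and the 40-hidden-neuron network below follow the abstract; everything else (OpenCV as the tooling, iteration limits) is an assumption.

```python
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def colour_features(bgr):
    """Mean R, G, B, H, S, V and intensity for one segmented seed image (uint8 BGR)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
    h, s, v = [c.astype(np.float32) for c in cv2.split(hsv)]
    i = (r + g + b) / 3.0  # intensity
    return np.array([r.mean(), g.mean(), b.mean(),
                     h.mean(), s.mean(), v.mean(), i.mean()])

# X: (200, 7) matrix from colour_features over the seed images; y: variety labels.
# 70% train, then the remaining 30% split evenly into validation and test:
# X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=0.70, random_state=0)
# X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)
# clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=2000).fit(X_tr, y_tr)
```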

    Yield and Quality Prediction of Winter Rapeseed — Artificial Neural Network and Random Forest Models

    As one of the greatest agricultural challenges, yield prediction is an important issue for producers, stakeholders, and the global trade market. Most of the variation in yield is attributed to environmental factors such as climate conditions, soil type and cultivation practices. Artificial neural networks (ANNs) and random forest regression (RFR) are machine learning tools widely used for crop yield prediction, yet there is limited research on their application to the prediction of rapeseed yield and quality. A four-year study (2015–2018) was carried out in the Republic of Serbia with 40 winter rapeseed genotypes. The field trial was designed as a randomized complete block design with three replications. An ANN based on the Broyden–Fletcher–Goldfarb–Shanno iterative algorithm and an RFR model were used for prediction of seed yield, oil and protein yield, oil and protein content, and 1000-seed weight, based on the year of production and genotype. The best production year for rapeseed cultivation was 2016, when the highest seed and oil yields were achieved, 2994 kg/ha and 1402 kg/ha, respectively. The RFR model showed better prediction capabilities than the ANN model (the r2 values for the output variables were 0.944, 0.935, 0.912, 0.886, 0.936 and 0.900 for oil content, protein content, seed yield, 1000-seed weight, oil yield and protein yield, respectively). Authors: Rajkovic D.; Jeromela A.M.; Pezo L.; Loncar B.; Zanetti F.; Monti A.; Spika A.K. Funding: Ministry of Education, Science and Technological Development of the Republic of Serbia, grant numbers 451-03-9/2021-14/200051, 451-03-9/2021-14/200134, 451-03-68/2020-14/200032 and 451-03-9/2021-14/200032.
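
    For readers wanting to reproduce the model comparison in spirit, here is a sketch contrasting random forest regression with a small ANN trained by scikit-learn's L-BFGS solver (the same quasi-Newton family as the BFGS algorithm cited above). The synthetic yield data is a placeholder, not the Serbian trial data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# mock inputs: [genotype id, year index]; mock target: seed yield in kg/ha
X = rng.integers(0, 40, size=(480, 2)).astype(float)
y = 2500.0 + 30.0 * X[:, 0] + 120.0 * X[:, 1] + rng.normal(0.0, 200.0, 480)

models = {
    "RFR": RandomForestRegressor(n_estimators=300, random_state=0),
    "ANN (L-BFGS)": MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=5000),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated r2 = {r2:.3f}")
```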

    Quality Estimation of Canola Using Machine Vision and Vis-nir Spectroscopy

    Canola is mainly graded either by visual inspection or by smell. These methods are subjective in nature and are bound to cause errors when deciding the grade of canola. To test canola for the amount of erucic acid present, the sample needs to be sent to a laboratory for wet chemical analysis, which is a time-consuming process. An electronic method that can quantify the amount of dockage, detect the presence of distinctly green and heat-treated seeds, and distinguish samples on the basis of erucic acid, free fatty acid content and peroxide value (PV) would not only be less time consuming but also more reliable for grading canola samples. Findings and conclusions: (1) canola samples cannot be classified on the basis of total dockage present using L and RGB data obtained from a flat-bed scanner; inclusion of morphological and textural features would improve the classification accuracy; (2) machine vision can be considered a potential method to grade canola on the basis of good, distinctly green and heat-damaged seeds.
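
    The first conclusion points at the fix: augment colour means with morphological and textural descriptors. A sketch of such a feature extractor, using scikit-image's GLCM and region-properties utilities (the thresholding rule and the choice of descriptors are assumptions, not the thesis's method):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops

def canola_features(rgb, gray):
    """Colour + texture + morphology features for one scan (rgb/gray as uint8 arrays)."""
    colour = rgb.reshape(-1, 3).mean(axis=0)           # mean R, G, B
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256)
    texture = [graycoprops(glcm, "contrast")[0, 0],
               graycoprops(glcm, "homogeneity")[0, 0]]
    regions = regionprops(label(gray > gray.mean()))   # assumes seeds brighter than background
    biggest = max(regions, key=lambda r: r.area)       # assumes at least one seed region
    morphology = [biggest.area, biggest.eccentricity]
    return np.concatenate([colour, texture, morphology])
```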

    Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions

    Convolutional Neural Networks (CNNs) have demonstrated their capabilities in the agronomical field, especially for the assessment of plant visual symptoms. As these models grow both in the number of training images and in the number of supported crops and diseases, there exists a dichotomy between (1) generating smaller models for specific crops or (2) generating a single multi-crop model for a much more complex task (especially at early disease stages), but with the benefit of the entire multi-crop image dataset's variability to enrich the learning of image feature descriptions. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone in real field conditions. This dataset contains almost equally distributed disease stages of seventeen diseases and five crops (wheat, barley, corn, rice and rapeseed), where several diseases can be present in the same picture. When applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC = 0.92 when generating the smaller crop-specific models and BAC = 0.93 when generating a single multi-crop model. We then propose three different CNN architectures that incorporate contextual non-image metadata, such as crop information, into an image-based convolutional neural network. This combines the advantages of learning from the entire multi-crop dataset while reducing the complexity of the disease classification tasks. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of the misclassifications of the former methods.
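
    The winning architecture, concatenating crop metadata with the image embedding before the classifier head, is easy to express. Here is a minimal PyTorch sketch, where the ResNet-18 backbone, embedding size and head widths are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CropConditionalCNN(nn.Module):
    def __init__(self, n_crops=5, n_diseases=17, emb_dim=512):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()            # expose the 512-d embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(emb_dim + n_crops, 256), nn.ReLU(),
            nn.Linear(256, n_diseases),
        )

    def forward(self, image, crop_onehot):
        emb = self.backbone(image)                 # (B, emb_dim)
        x = torch.cat([emb, crop_onehot], dim=1)   # inject crop context at the embedding
        return self.head(x)

model = CropConditionalCNN()
logits = model(torch.randn(2, 3, 224, 224), torch.eye(5)[:2])  # smoke test: (2, 17)
```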

    RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures

    © The Author(s) 2019. Published by Oxford University Press. BACKGROUND: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stresses such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images present a significant computer vision challenge: root images contain complicated structures and exhibit variation in size, background, occlusion, clutter and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips to drive a search algorithm that seeks optimal paths throughout the image, extracting accurate architectures without user interaction. RESULTS: We develop and train a novel deep network architecture that explicitly combines local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite far fewer training images. CONCLUSIONS: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
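
    The transfer-learning results suggest the usual few-image adaptation recipe: keep the learned feature extractor, retrain the task-specific layers. A hedged sketch of that pattern follows; the `encoder` attribute is a placeholder, not the actual RootNav 2.0 code.

```python
import torch

def prepare_for_transfer(model, lr=1e-4):
    """Freeze the encoder and return an optimizer over the remaining layers."""
    for p in model.encoder.parameters():   # keep features learned on the source species
        p.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=lr)
```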

    PST: Plant segmentation transformer for 3D point clouds of rapeseed plants at the podding stage

    Segmentation of plant point clouds to obtain high-precision morphological traits is essential for plant phenotyping. Although the fast development of deep learning has boosted much research on segmentation of plant point clouds, previous studies mainly focus on hard-voxelization-based or down-sampling-based methods, which are limited to segmenting simple plant organs; segmentation of complex plant point clouds at high spatial resolution remains challenging. In this study, we propose a deep learning network, the plant segmentation transformer (PST), to achieve semantic and instance segmentation of point clouds of rapeseed plants acquired by handheld laser scanning (HLS) at high spatial resolution, which can characterize the tiny siliques that are the main targeted traits. PST is composed of: (i) a dynamic voxel feature encoder (DVFE) that aggregates per-point features at the raw spatial resolution; (ii) dual window-set attention blocks that capture contextual information; and (iii) a dense feature propagation module that produces the final dense point feature map. The results show that PST and PST-PointGroup (PG) achieve superior performance in the semantic and instance segmentation tasks. For semantic segmentation, the mean IoU, mean precision, mean recall, mean F1-score and overall accuracy of PST were 93.96%, 97.29%, 96.52%, 96.88% and 97.07%, achieving an improvement of 7.62%, 3.28%, 4.80%, 4.25% and 3.88% over the second-best state-of-the-art network, PAConv. For instance segmentation, PST-PG reached 89.51%, 89.85%, 88.83% and 82.53% in mCov, mWCov, mPrec90 and mRec90, achieving an improvement of 2.93%, 2.21%, 1.99% and 5.90% over the original PG. This study shows that deep-learning-based point cloud segmentation has great potential for resolving dense plant point clouds with complex morphological traits.
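
    The DVFE's core idea, voxelizing without discarding points, can be shown in a few lines of NumPy: every point is mapped to a voxel and its features are scatter-averaged per voxel. This is a sketch of the general dynamic voxelization technique, not the paper's implementation; the voxel size is an arbitrary assumption.

```python
import numpy as np

def dynamic_voxel_mean(points, feats, voxel_size=0.005):
    """Average per-point features into voxels; no point is dropped (unlike hard voxelization)."""
    coords = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    voxels, inverse = np.unique(coords, axis=0, return_inverse=True)
    sums = np.zeros((len(voxels), feats.shape[1]), dtype=np.float64)
    np.add.at(sums, inverse, feats)                           # scatter-add point features
    counts = np.bincount(inverse, minlength=len(voxels))[:, None]
    return voxels, sums / counts                              # per-voxel mean features
```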

    Recent results in the development of band steaming for intra-row weed control

    The recent achievements in developing band-steaming techniques for intra-row weed control in vegetables are presented.