
    Computational intelligence image processing for precision farming on-site nitrogen analysis in plants

    PhD Thesis. Nitrogen is one of the macronutrients essential to plants. To support precision farming, it is important to analyse the nitrogen status of plants in order to prevent excessive fertilisation and to reduce production costs. Image-based analysis has been widely used to estimate nitrogen content in plants; such research, however, is commonly conducted in a controlled environment with artificial lighting. This thesis proposes three novel computational intelligence systems to evaluate nitrogen status in wheat plants by analysing plant images captured in the field and therefore subject to variation in lighting conditions. In the first proposed method, a fusion of regularised neural networks (NN) is employed to normalise plant images based on the RGB colours of the 24-patch Macbeth colour checker. The colour normalisation results are then optimised using a genetic algorithm (GA). The regularised neural network is also used to distinguish wheat leaves from other, unwanted parts, giving improved results compared to the Otsu algorithm. Furthermore, several neural networks with different numbers of hidden-layer nodes are combined using committee machines and optimised by the GA to estimate nitrogen content. In the second proposed method, the regularised NN is replaced by a deep sparse extreme learning machine (DSELM). In general, the DSELM is as effective across the three research steps as the regularised NN of the first method, but it trains much faster than both the regularised NN and the standard backpropagation multilayer perceptron (MLP). In the third proposed method, a novel approach is developed to fine-tune the colour normalisation based on the nutrient estimation errors and to analyse the effect of GA-based global optimisation on the nitrogen estimation results. In this method, an ensemble of deep learning MLPs (DL-MLP) is employed in the three research steps, i.e. colour normalisation, image segmentation and nitrogen estimation. The performance of the three proposed methods is compared with the intrusive SPAD meter, and the results show that all the proposed methods are superior to SPAD-based estimation: the nutrient estimation errors of the proposed methods are less than 3%, while the error of the SPAD meter method is 8.48%. For comparison, nitrogen predictions using the Kawashima greenness index and a PCA-based greenness index are also calculated; their prediction errors are 9.84% and 9.20%, respectively.
    Indonesia Ministry of Research, Technology and Higher Education and Jenderal Soedirman University
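    The committee-machine step described above lends itself to a compact illustration. Below is a minimal sketch, not the thesis code: an ensemble of regularised MLP regressors with different hidden-layer sizes whose averaged output estimates nitrogen content. The colour features, the synthetic targets and all parameter values are placeholders, and the GA optimisation of the combination is omitted.

```python
# Hedged sketch of a committee of regressors (scikit-learn), not the thesis implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(200, 6)                                   # placeholder colour features per leaf
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 3] + 0.05 * rng.randn(200)   # synthetic nitrogen %

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Committee members differ in the number of hidden nodes; the L2 penalty (alpha)
# stands in for the regularisation described in the abstract.
members = [MLPRegressor(hidden_layer_sizes=(h,), alpha=1e-3, max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
           for h in (5, 10, 20, 40)]

# Unweighted committee output; the thesis additionally tunes the combination with a GA.
y_pred = np.mean([m.predict(X_te) for m in members], axis=0)
print("Committee MAPE: %.2f%%" % (100 * np.mean(np.abs((y_te - y_pred) / y_te))))
```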

    Designing a fruit identification algorithm in orchard conditions to develop robots using video processing and majority voting based on hybrid artificial neural network

    Identifying fruit on trees is the first step in developing orchard robots for purposes such as fruit harvesting and site-specific spraying. Because of the natural conditions of fruit orchards and the unevenness of the various objects throughout them, working under controlled conditions is very difficult; as a result, these operations should be performed in natural conditions, in terms of both lighting and background. Since the other operations of an orchard robot depend on the fruit identification stage, this step must be performed precisely. The purpose of this paper was therefore to design an identification algorithm for orchard conditions using a combination of video processing and majority voting based on different hybrid artificial neural networks. The steps in designing this algorithm were: (1) recording video of different plum orchards at different light intensities; (2) converting the recorded videos into frames; (3) extracting different color properties from pixels; (4) selecting effective properties from the extracted color properties using a hybrid artificial neural network-harmony search (ANN-HS); and (5) classification using majority voting based on three classifiers: artificial neural network-bees algorithm (ANN-BA), artificial neural network-biogeography-based optimization (ANN-BBO), and artificial neural network-firefly algorithm (ANN-FA). The most effective features selected by the hybrid ANN-HS were the third channel in hue saturation lightness (HSL) color space, the second channel in lightness chroma hue (LCH) color space, the first channel in L*a*b* color space, and the first channel in hue saturation intensity (HSI) color space. The results showed that the accuracy of the majority voting method in the best execution and over 500 executions was 98.01% and 97.20%, respectively. Based on different performance evaluation criteria, the majority voting method had the higher performance.
    European Union (EU) under the Erasmus+ project “Fostering Internationalization in Agricultural Engineering in Iran and Russia” [FARmER], grant number 585596-EPP-1-2017-1-DE-EPPKA2-CBHE-JP
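    The majority-voting step can be illustrated with a short sketch. This is not the paper's implementation: the paper trains each ANN with a different metaheuristic (bees algorithm, BBO, firefly), whereas here plain MLP classifiers with different seeds stand in, purely to show the hard-voting mechanics; the data and estimator names are placeholders.

```python
# Hedged sketch of hard majority voting over three neural-network classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split

# Placeholder for per-pixel color features (e.g. selected HSL/LCH/L*a*b*/HSI channels).
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

voter = VotingClassifier(
    estimators=[(name, MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                     random_state=seed))
                for name, seed in (("ann_ba", 0), ("ann_bbo", 1), ("ann_fa", 2))],
    voting="hard")                                     # majority vote on predicted labels
voter.fit(X_tr, y_tr)
print("Voting accuracy: %.3f" % voter.score(X_te, y_te))
```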

    Digital phenotyping and genotype-to-phenotype (G2P) models to predict complex traits in cereal crops

    The revolution in digital phenotyping, combined with the new layers of omics and envirotyping tools, offers great promise to improve selection and accelerate genetic gains for crop improvement. This chapter examines the latest methods involving digital phenotyping tools to predict complex traits in cereal crops. The chapter has two parts. The first part, entitled “Digital phenotyping as a tool to support breeding programs”, reviews the secondary phenotypes measured by high-throughput plant phenotyping that are potentially useful for breeding. The second part, “Implementing complex G2P models in breeding programs”, discusses the integration of data from digital phenotyping into genotype-to-phenotype (G2P) models to improve the prediction of complex traits using genomic information. The current status of statistical models that incorporate secondary traits in univariate and multivariate models, as well as how to better handle longitudinal traits (for example light interception, biomass accumulation, canopy height), is reviewed.

    A comprehensive review of crop yield prediction using machine learning approaches with special emphasis on palm oil yield prediction

    An early and reliable estimation of crop yield is essential for quantitative and financial evaluation at the field level, for determining strategic plans for agricultural commodities, for import-export policies and for doubling farmers' incomes. Predicting crop yield with machine learning algorithms remains one of the challenging problems in the agricultural sector. Given this growing significance of crop yield prediction, this article provides an exhaustive review of the use of machine learning algorithms to predict crop yield, with special emphasis on palm oil yield prediction. Initially, the current status of palm oil yield around the world is presented, along with a brief discussion of widely used features and prediction algorithms. Then, a critical evaluation of state-of-the-art machine learning-based crop yield prediction, machine learning applications in the palm oil industry and a comparative analysis of related studies are presented. A detailed study of the advantages and difficulties of machine learning-based crop yield prediction follows, together with an identification of current and future challenges for the agricultural industry; potential solutions are also prescribed to alleviate existing problems in crop yield prediction. Since one of the major objectives of this study is to explore future perspectives of machine learning-based palm oil yield prediction, areas including the application of remote sensing, plant growth and disease recognition, mapping and tree counting, and optimum features and algorithms are broadly discussed. Finally, a prospective architecture for machine learning-based palm oil yield prediction is proposed based on the critical evaluation of existing related studies. This technology will fulfil its promise by addressing new research challenges in the analysis and development of crop yield prediction.

    Quantifying soybean phenotypes using UAV imagery and machine learning, deep learning methods

    Crop breeding programs aim to introduce new cultivars with improved traits to help address the food crisis; food production will need to grow at roughly twice the current rate to feed the increasing population by 2050. Soybean is one of the major grain crops in the world, and the US alone contributes around 35 percent of world soybean production. To increase soybean production, breeders still rely on conventional breeding strategies, which are mainly a 'trial and error' process, and these constraints limit the expected progress of a crop breeding program. The goal of this study was to quantify the soybean phenotypes of plant lodging and pubescence color using UAV-based imagery and advanced machine learning. Plant lodging and pubescence color are two of the most important phenotypes for soybean breeding programs; both are conventionally evaluated visually by breeders, which is time-consuming and subject to human error. Specifically, this study investigated the potential of unmanned aerial vehicle (UAV)-based imagery and machine learning for assessing lodging conditions, and of deep learning for assessing pubescence color, of soybean breeding lines. A UAV imaging system equipped with an RGB (red-green-blue) camera was used to collect imagery of 1,266 four-row plots in a soybean breeding field at the reproductive stage. Soybean lodging scores and pubescence scores were visually assessed by experienced breeders. Lodging scores were grouped into four classes, i.e., non-lodging, moderate lodging, high lodging, and severe lodging, while pubescence color scores were grouped into three classes, i.e., gray, tawny, and segregating. UAV images were stitched to build orthomosaics, and soybean plots were segmented using a grid method. Twelve image features were extracted from the collected images to assess the lodging score of each breeding line. Four models, i.e., extreme gradient boosting (XGBoost), random forest (RF), K-nearest neighbor (KNN), and artificial neural network (ANN), were evaluated to classify soybean lodging classes. Five data pre-processing methods were used to treat the imbalanced dataset and improve classification accuracy. Results indicate that the pre-processing method SMOTE-ENN consistently performs well for all four classifiers (XGBoost, RF, KNN, and ANN), achieving the highest overall accuracy (OA), lowest misclassification, and higher F1-score and Kappa coefficient. This suggests that Synthetic Minority Over-sampling combined with Edited Nearest Neighbor (SMOTE-ENN) may be an excellent pre-processing method for imbalanced datasets in classification tasks. An overall accuracy of 96 percent was obtained using the SMOTE-ENN dataset and the ANN classifier. To classify soybean pubescence color, seven pre-trained deep learning models, i.e., DenseNet121, DenseNet169, DenseNet201, ResNet50, InceptionResNet-V2, Inception-V3, and EfficientNet, were used, and images of each plot were fed into the models. Data were augmented using two rotation and two scaling factors to enlarge the dataset. Among the seven pre-trained deep learning models, the ResNet50 and DenseNet121 classifiers showed a higher overall accuracy of 88 percent, along with higher precision, recall, and F1-score for all three classes of pubescence color.
    In conclusion, the developed UAV-based high-throughput phenotyping system can extract image features to estimate and classify crucial soybean phenotypes, which will help breeders assess phenotypic variation in breeding trials. RGB imagery-based classification could also be a cost-effective choice for breeders and associated researchers in plant breeding programs for identifying superior genotypes.
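    The SMOTE-ENN pre-processing step described above can be sketched briefly. This is a minimal illustration under stated assumptions, not the study's code: it requires the third-party package imbalanced-learn, uses synthetic plot features in place of the twelve image features, and a random forest stands in for the four classifiers compared in the study.

```python
# Hedged sketch: balance lodging classes with SMOTE-ENN, then train a classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, f1_score
from imblearn.combine import SMOTEENN

# Placeholder for 12 image features per plot; 4 imbalanced lodging classes.
X, y = make_classification(n_samples=1266, n_features=12, n_informative=8,
                           n_classes=4, weights=[0.7, 0.15, 0.1, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Over-sample the minority classes (SMOTE), then clean noisy samples (ENN).
X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
y_pred = clf.predict(X_te)
print("F1 (macro):", round(f1_score(y_te, y_pred, average="macro"), 3))
print("Kappa:     ", round(cohen_kappa_score(y_te, y_pred), 3))
```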

    Wavelength Selection Method Based on Partial Least Square from Hyperspectral Unmanned Aerial Vehicle Orthomosaic of Irrigated Olive Orchards

    Identifying and mapping irrigated areas is essential for a variety of applications such as agricultural planning and water resource management. Irrigated plots are mainly identified using supervised classification of multispectral images from satellite or manned aerial platforms. Recently, hyperspectral sensors on board Unmanned Aerial Vehicles (UAV) have proven to be useful analytical tools in agriculture due to their high spectral resolution. However, few efforts have been made to identify which wavelengths provide relevant information in specific scenarios. In this study, hyperspectral reflectance data from a UAV were used to compare the performance of several wavelength selection methods based on Partial Least Square (PLS) regression for discriminating two irrigation systems commonly used in olive orchards. The tested PLS methods include filter methods (Loading Weights, Regression Coefficient and Variable Importance in Projection); wrapper methods (Genetic Algorithm-PLS, Uninformative Variable Elimination-PLS, Backward Variable Elimination-PLS, Sub-window Permutation Analysis-PLS, Iterative Predictive Weighting-PLS, Regularized Elimination Procedure-PLS, Backward Interval-PLS, Forward Interval-PLS and Competitive Adaptive Reweighted Sampling-PLS); and an embedded method (Sparse-PLS). In addition, two non-PLS based methods, Lasso and Boruta, were also used. Linear Discriminant Analysis and nonlinear K-Nearest Neighbors techniques were used for identification and assessment. The results indicate that wavelength selection methods commonly used in other disciplines are useful in remote sensing for agronomic purposes, the identification of irrigation techniques being one such example. These PLS and non-PLS based methods can also play an important role in multivariate analysis and subsequent model analysis. Of all the methods evaluated, Genetic Algorithm-PLS and Boruta eliminated nearly 90% of the original spectral wavelengths acquired from a hyperspectral sensor on board a UAV while increasing classification accuracy.
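    One of the filter methods listed above, Variable Importance in Projection (VIP), can be sketched compactly. The sketch below is an illustration only, with synthetic spectra and placeholder band counts rather than the study's UAV data: it fits a PLS model and ranks wavelengths by their VIP scores.

```python
# Hedged sketch of VIP-based wavelength ranking from a PLS model (scikit-learn).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.RandomState(0)
n_samples, n_bands = 120, 200                          # placeholder: plots x spectral bands
X = rng.rand(n_samples, n_bands)
y = (X[:, 50] + 0.5 * X[:, 120] + 0.1 * rng.randn(n_samples) > 1.0).astype(float)

pls = PLSRegression(n_components=5).fit(X, y)
W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_

# Variance in y explained by each latent component.
ssy = np.array([(Q[:, a] ** 2).sum() * (T[:, a] ** 2).sum() for a in range(W.shape[1])])
w_norm = W / np.linalg.norm(W, axis=0)                 # normalised weight vectors
vip = np.sqrt(n_bands * (w_norm ** 2 @ ssy) / ssy.sum())

selected = np.argsort(vip)[::-1][:20]                  # keep the 20 highest-VIP wavelengths
print("Top band indices:", np.sort(selected))
```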

    Crop Disease Detection Using Remote Sensing Image Analysis

    Pest and crop disease threats are often driven by complex changes in crops and in applied agricultural practices, which result mainly from increasing food demand and climate change at the global level. In an attempt to explore high-end and sustainable solutions for both pest and crop disease management, remote sensing technologies have been employed, taking advantage of changes in the metabolic activity of infected crops, which in turn are strongly associated with their spectral reflectance properties. Recent developments applied to high-resolution data acquired with remote sensing tools offer the opportunity to map infected field areas as patches, or to map areas that are susceptible to disease. This facilitates the discrimination between healthy and diseased crops, providing an additional tool for crop monitoring. The current book brings together recent research comprising innovative applications that involve novel remote sensing approaches oriented towards crop disease detection. The book provides an in-depth view of the developments in remote sensing and explores its potential to assess the health status of crops.

    On the Use of Imaging Spectroscopy from Unmanned Aerial Systems (UAS) to Model Yield and Assess Growth Stages of a Broadacre Crop

    Snap bean production was valued at $363 million in 2018. Moreover, the increasing need for food production, driven by the exponential increase in population, makes this crop vitally important to study. Traditionally, harvest time determination and yield prediction are performed by collecting a limited number of samples. While this approach can work, it is inaccurate, labor-intensive, and based on a small sample size; its ambiguity also leaves the grower with under-ripe and over-mature plants, decreasing the final net profit and the overall quality of the product. A more cost-effective method would be a site-specific approach that saves time and labor for farmers and growers while providing them with exact detail on when and where to harvest and how much is to be harvested (while forecasting yield). In this study we used hyperspectral (i.e., point-based and image-based) as well as biophysical data to identify spectral signatures and biophysical attributes that could schedule harvest and forecast yield prior to harvest. Over the past two decades, there have been immense advances in the field of yield and harvest modeling using remote sensing data. Nevertheless, there still exists a wide gap in the literature covering yield and harvest assessment as a function of time using both ground-based and unmanned aerial systems, and a need for a study focusing on crop-specific yield and harvest assessment using a rapid, affordable system. We hypothesize that a down-sampled multispectral system, tuned with spectral features identified from hyperspectral data, could address these gaps. Moreover, we hypothesize that the airborne data will contain noise that could negatively impact the performance and reliability of the models used. We therefore address these knowledge gaps with three objectives: 1. Assess yield prediction of the snap bean crop using spectral and biophysical data and identify discriminating spectral features via statistical and machine learning approaches. 2. Evaluate snap bean harvest maturity at both the plant growth stage and pod maturity level, by means of spectral and biophysical indicators, and identify the corresponding discriminating spectral features. 3. Assess the feasibility of using a deep learning architecture for reducing noise in the hyperspectral data. In light of these objectives, we carried out a greenhouse study in the winter and spring of 2019, in which we studied temporal changes in the spectra and physical attributes of snap bean plants of the Huntington cultivar, using a handheld spectrometer in the visible to shortwave-infrared domain (400-2500 nm). Chapter 3 of this dissertation focuses on yield assessment of the greenhouse study. Findings from this best-case-scenario yield study showed that the best time to assess yield, giving the most accurate yield predictions, is approximately 20-25 days prior to harvest. The proposed approach was able to explain variability as high as R2 = 0.72, with spectral features residing in absorption regions for chlorophyll, protein, lignin, and nitrogen, among others. The data captured in this study contained minimal noise, even in the detector fall-off regions. Moving the focus to harvest maturity assessment, Chapter 4 presents findings from this objective in the greenhouse environment.
    Our findings showed that four stages of maturity, namely vegetative growth, budding, flowering, and pod formation, are distinguishable with 79% and 78% accuracy via the two introduced vegetation indices, the snap-bean growth index (SGI) and the normalized difference snap-bean growth index (NDSI), respectively. Moreover, pod-level maturity classification showed that ready-to-harvest and not-ready-to-harvest pods can be separated with 78% accuracy, with identified wavelengths residing in the green, red-edge, and shortwave-infrared regions. Chapters 5 and 6 then focus on transitioning the learned concepts from the greenhouse scenario to the UAS domain. We moved from a handheld spectrometer in the visible to shortwave-infrared domain (400-2500 nm) to a UAS-mounted hyperspectral imager in the visible-to-near-infrared region (400-1000 nm). Two years' worth of data, at two different geographical locations in upstate New York, were collected and examined for the yield modeling and harvest scheduling objectives. For analysis of the collected data, we introduced a feature selection library in Python, named “Jostar”, to identify the most discriminating wavelengths. The findings from the UAS yield modeling study show that pod weight and seed length, as two different yield indicators, can be explained with R2 as high as 0.93 and 0.98, respectively. Identified wavelengths resided in the blue, green, red, and red-edge regions, and 44-55 days after planting (DAP) proved to be the optimal time for yield assessment. Chapter 6, on the other hand, evaluates maturity assessment, in terms of pod classification, from the UAS perspective. Results from this study showed that the identified features resided in the blue, green, red, and red-edge regions, contributing to an F1 score as high as 0.91 for differentiating ready-to-harvest from not-ready-to-harvest pods; these features are in line with those detected in the UAS yield assessment study. In order to compare the greenhouse study directly with the UAS studies, in Chapter 7 we adopted the methodology employed for the UAS studies and applied it to the greenhouse studies. Since the greenhouse data were captured in the visible-to-shortwave-infrared (400-2500 nm) domain and the UAS data in the VNIR (400-1000 nm) domain, we truncated the spectral range of the greenhouse data to the VNIR domain. The comparison between the greenhouse study and the UAS studies for yield assessment, at early and late harvest stages, showed that spectral features in the 450-470, 500-520, 650, and 700-730 nm regions were repeated on the days with the highest coefficient of determination; moreover, 46-48 DAP, with high coefficients of determination for yield prediction, recurred in five out of six data sets (two early stages, each with three data sets). The harvest maturity comparison between the greenhouse study and the UAS data sets showed that similar identified wavelengths reside in the ∼450, ∼530, ∼715, and ∼760 nm regions, with performance metrics (F1 score) of 0.78, 0.84, and 0.90 for the greenhouse, 2019 UAS, and 2020 UAS data, respectively.
    However, the noise in the data captured in the UAS study, along with the high computational cost of the classical mathematical approach employed for denoising hyperspectral data, inspired us to improve the computational performance of hyperspectral denoising by assessing the feasibility of transferring the problem to deep learning models. In Chapter 8, we approached hyperspectral denoising in the spectral domain (a 1D formulation) for two types of noise: integrated noise and non-independent and non-identically distributed (non-i.i.d.) noise. We utilized memory networks, owing to their power in image denoising, for hyperspectral denoising, introduced a new loss, and benchmarked the approach against several data sets and models. The proposed model, HypeMemNet, ranked first, by up to 40% in terms of signal-to-noise ratio (SNR), for resolving integrated noise, and first or second, by a small margin, for resolving non-i.i.d. noise. Our findings showed that a proper receptive field and a suitable number of filters are crucial for denoising integrated noise, while parameter size was shown to be of the highest importance for non-i.i.d. noise. Results from the conducted studies provide a comprehensive understanding encompassing yield modeling, harvest scheduling, and hyperspectral denoising. Our findings bode well for transitioning from an expensive hyperspectral imager to a multispectral imager tuned with the identified bands, as well as for employing a rapid deep learning model for hyperspectral denoising.
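    A normalized-difference index of the kind introduced in Chapter 4 (the NDSI) reduces to a simple band-pair computation. The sketch below is illustrative only: the band positions are hypothetical placeholders in the green and red-edge regions mentioned in the text, not the wavelengths identified in the dissertation.

```python
# Hedged sketch: a normalized-difference index computed from a hyperspectral cube.
import numpy as np

def normalized_difference(cube, wavelengths, band_a_nm, band_b_nm):
    """cube: (rows, cols, bands) reflectance; wavelengths: (bands,) in nm."""
    ia = int(np.argmin(np.abs(wavelengths - band_a_nm)))   # nearest band to A
    ib = int(np.argmin(np.abs(wavelengths - band_b_nm)))   # nearest band to B
    a, b = cube[..., ia].astype(float), cube[..., ib].astype(float)
    return (a - b) / np.clip(a + b, 1e-6, None)            # guard against divide-by-zero

# Synthetic 400-1000 nm cube at 5 nm steps (matching the VNIR range flown on the UAS).
wl = np.arange(400, 1001, 5, dtype=float)
cube = np.random.rand(64, 64, wl.size)

# Hypothetical red-edge / green band pair.
index_map = normalized_difference(cube, wl, 715, 530)
print(index_map.shape, float(index_map.mean()))
```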

    Deep Learning based Automatic Multi-Class Wild Pest Monitoring Approach using Hybrid Global and Local Activated Features with Stationary Trap Devices

    Specialized control of pests and diseases has been a high-priority issue for the agriculture industry in many countries. On account of their automation and cost-effectiveness, image-analysis-based pest recognition systems are widely utilized in practical crop protection applications. However, because handcrafted features lack discriminative power, current image-analysis approaches achieve low accuracy and poor robustness in practical large-scale multi-class pest detection and recognition. To tackle this problem, this paper proposes a novel deep learning based automatic approach using hybrid global and local activated features for pest monitoring. In the presented method, we exploit global information from feature maps to build a Global activated Feature Pyramid Network (GaFPN) that extracts highly discriminative pest features across various scales at both depth and position levels, making changes in depth- or spatially-sensitive features in pest images more visible during downsampling. Next, an improved pest localization module named Local activated Region Proposal Network (LaRPN) is proposed to find precise pest object positions by augmenting contextual and attentional information for feature completion and enhancement at the local level. The approach is evaluated on our 7-year large-scale pest dataset containing 88.6K images (16 types of pests) with 582.1K manually labelled pest objects. The experimental results show that our solution achieves 74.24% mAP in industrial circumstances, which outperforms two other state-of-the-art methods: Faster R-CNN [12] with an mAP of up to 70% and FPN [13] with an mAP of up to 72%. Our code and dataset will be made publicly available.
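    The idea of "globally activating" a feature map can be illustrated with a toy example. The sketch below is not the paper's GaFPN or LaRPN: it shows a generic squeeze-and-excitation style channel gate, i.e. re-weighting channels by a gate computed from globally pooled statistics, which is one common way of injecting global context at a pyramid level. All shapes and weights are placeholders.

```python
# Hedged sketch of a global-average-pooling channel gate (NumPy, illustration only).
import numpy as np

def global_activate(feature_map, w1, w2):
    """feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeeze = feature_map.mean(axis=(1, 2))            # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)             # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # sigmoid channel gate, shape (C,)
    return feature_map * gate[:, None, None]           # channel-wise re-weighted map

rng = np.random.RandomState(0)
C, H, W, r = 64, 32, 32, 4
fmap = rng.randn(C, H, W)
out = global_activate(fmap, 0.1 * rng.randn(C // r, C), 0.1 * rng.randn(C, C // r))
print(out.shape)
```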

    Disruptive Technologies in Agricultural Operations: A Systematic Review of AI-driven AgriTech Research

    The evolving field of disruptive technologies has recently gained significant interest in various industries, including agriculture. The fourth industrial revolution has reshaped the context of Agricultural Technology (AgriTech) with applications of Artificial Intelligence (AI) and a strong focus on data-driven analytical techniques. Motivated by the advances in AgriTech for agrarian operations, the study presents a state-of-the-art review of research advances that have been evolving at a fast pace over the last decades due to the disruptive potential of the technological context. Following a systematic literature review approach, we develop a categorisation of the various types of AgriTech, as well as the associated AI-driven techniques which form the continuously shifting definition of AgriTech. The contribution primarily draws on the conceptualisation of, and awareness about, the AI-driven AgriTech context relevant to agricultural operations for smart, efficient, and sustainable farming. The study provides a single normative reference for the definition, context and future directions of the field for further research towards the operational context of AgriTech. Our findings indicate that AgriTech research and the disruptive potential of AI in the agricultural sector are still in their infancy in Operations Research. Through the systematic review, we also intend to inform a wide range of agricultural stakeholders (farmers, agripreneurs, scholars and practitioners) and to provide a research agenda for a growing field with multiple potentialities for the future of agricultural operations.