78 research outputs found

    Crop Disease Detection Using Remote Sensing Image Analysis

    Pest and crop disease threats are shaped by complex changes in crops and in agricultural practices, driven mainly by rising food demand and climate change at the global level. In the search for high-end and sustainable solutions for both pest and crop disease management, remote sensing technologies have been employed, taking advantage of the fact that infection alters the metabolic activity of crops, which in turn is closely tied to their spectral reflectance properties. Recent developments applied to high-resolution data acquired with remote sensing tools offer the additional opportunity of mapping infected field areas as patches, or of mapping areas susceptible to disease. This eases the discrimination between healthy and diseased crops, providing a further tool for crop monitoring. The current book brings together recent research comprising innovative applications of novel remote sensing approaches oriented to crop disease detection. The book provides an in-depth view of the developments in remote sensing and explores its potential to assess health status in crops.
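
    As a concrete illustration of the reflectance-based discrimination described above, the following sketch computes a normalized difference vegetation index (NDVI) from red and near-infrared reflectance bands and flags low-index pixels as candidate infected patches. The band arrays and the 0.4 threshold are illustrative placeholders, not values taken from the book.

    import numpy as np

    def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
        # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
        return (nir - red) / np.clip(nir + red, 1e-6, None)

    # Hypothetical reflectance bands scaled to [0, 1]; in practice these come
    # from a calibrated multispectral or hyperspectral image.
    rng = np.random.default_rng(0)
    red = rng.random((100, 100)) * 0.3
    nir = rng.random((100, 100)) * 0.8

    index = ndvi(red, nir)
    # Depressed NDVI can indicate stressed or infected canopy; the threshold
    # is arbitrary here and would be tuned per crop, sensor, and growth stage.
    suspect_mask = index < 0.4
    print(f"{suspect_mask.mean():.1%} of pixels flagged for inspection")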

    Quantifying soybean phenotypes using UAV imagery and machine learning, deep learning methods

    Crop breeding programs aim to introduce new cultivars with improved traits to help solve the food crisis. Food production will need to roughly double its current growth rate to feed the increasing population by 2050. Soybean is one of the major grain crops in the world, and the US alone contributes around 35 percent of world soybean production. To increase soybean production, breeders still rely on conventional breeding strategies, which are mainly 'trial and error' processes; these constraints limit the expected progress of crop breeding programs. This study aimed to quantify the soybean phenotypes of plant lodging and pubescence color using UAV-based imagery and advanced machine learning. Plant lodging and pubescence color are two of the most important phenotypes for soybean breeding programs. Both are conventionally evaluated visually by breeders, which is time-consuming and subject to human error. The goal of this study was therefore to investigate the potential of unmanned aerial vehicle (UAV)-based imagery with machine learning for assessing lodging conditions, and with deep learning for assessing pubescence color, in soybean breeding lines.

    A UAV imaging system equipped with an RGB (red-green-blue) camera was used to collect imagery of 1,266 four-row plots in a soybean breeding field at the reproductive stage. Soybean lodging and pubescence color scores were visually assessed by experienced breeders. Lodging scores were grouped into four classes, i.e., non-lodging, moderate lodging, high lodging, and severe lodging, while pubescence color scores were grouped into three classes, i.e., gray, tawny, and segregation. UAV images were stitched to build orthomosaics, and soybean plots were segmented using a grid method. Twelve image features were extracted from the collected images to assess the lodging score of each breeding line. Four models, i.e., extreme gradient boosting (XGBoost), random forest (RF), K-nearest neighbor (KNN), and artificial neural network (ANN), were evaluated for classifying soybean lodging classes, and five data pre-processing methods were used to treat the imbalanced dataset and improve classification accuracy. Results indicate that the pre-processing method SMOTE-ENN consistently performs well for all four classifiers (XGBoost, RF, KNN, and ANN), achieving the highest overall accuracy (OA), the lowest misclassification rate, and the highest F1-score and Kappa coefficient. This suggests that Synthetic Minority Over-sampling combined with Edited Nearest Neighbor (SMOTE-ENN) may be an excellent pre-processing method for classification tasks on imbalanced datasets. Furthermore, an overall accuracy of 96 percent was obtained using the SMOTE-ENN dataset and the ANN classifier.

    To classify soybean pubescence color, seven pre-trained deep learning models, i.e., DenseNet121, DenseNet169, DenseNet201, ResNet50, InceptionResNet-V2, Inception-V3, and EfficientNet, were used, with images of each plot fed into the models. The data were augmented using two rotation and two scaling factors to enlarge the dataset. Among the seven pre-trained models, the ResNet50 and DenseNet121 classifiers showed the highest overall accuracy of 88 percent, along with higher precision, recall, and F1-score for all three classes of pubescence color.

    In conclusion, the developed UAV-based high-throughput phenotyping system can gather image features to estimate and classify crucial soybean phenotypes, which will help breeders assess phenotypic variation in breeding trials. Moreover, RGB imagery-based classification could be a cost-effective choice for breeders and associated researchers in plant breeding programs for identifying superior genotypes.

    Includes bibliographical references.
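
    The imbalanced-data workflow described above can be sketched with the scikit-learn and imbalanced-learn libraries: SMOTE-ENN resampling of the training split, followed by classification. The feature matrix, class proportions, and model settings below are synthetic placeholders, not the study's data, and MLPClassifier stands in for the ANN.

    import numpy as np
    from imblearn.combine import SMOTEENN
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, f1_score, cohen_kappa_score

    rng = np.random.default_rng(0)
    X = rng.random((1266, 12))                              # placeholder plot features
    y = rng.choice(4, size=1266, p=[0.7, 0.15, 0.1, 0.05])  # imbalanced lodging classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Resample only the training split so that test metrics stay honest.
    X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

    for clf in (RandomForestClassifier(random_state=0),
                MLPClassifier(max_iter=500, random_state=0)):
        clf.fit(X_res, y_res)
        pred = clf.predict(X_te)
        print(type(clf).__name__,
              f"OA={accuracy_score(y_te, pred):.2f}",
              f"F1={f1_score(y_te, pred, average='macro'):.2f}",
              f"kappa={cohen_kappa_score(y_te, pred):.2f}")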

    High-throughput estimation of crop traits: A review of ground and aerial phenotyping platforms

    Crop yields need to be improved in a sustainable manner to meet the expected worldwide increase in population over the coming decades, as well as the effects of anticipated climate change. Recently, genomics-assisted breeding has become a popular approach to food security; in this regard, the crop breeding community must better link the relationships between phenotype and genotype. While high-throughput genotyping is feasible at a low cost, high-throughput crop phenotyping methods and data analytical capacities need to be improved. High-throughput phenotyping offers a powerful way to assess particular phenotypes in large-scale experiments, using high-tech sensors, advanced robotics, and image-processing systems to monitor and quantify plants in breeding nurseries and field experiments at multiple scales. In addition, new bioinformatics platforms are able to embrace large-scale, multidimensional phenotypic datasets. Through the combined analysis of phenotyping and genotyping data, environmental responses and gene functions can now be dissected at unprecedented resolution. This will aid in finding solutions to the currently limited and incremental improvements in crop yields.

    Remote Sensing in Agriculture: State-of-the-Art

    The Special Issue on “Remote Sensing in Agriculture: State-of-the-Art” gives an exhaustive overview of the ongoing transfer of remote sensing technology into the agricultural sector. It consists of 10 high-quality papers focusing on a wide range of remote sensing models and techniques used to forecast crop production and yield, to map the agricultural landscape, and to evaluate plant and soil biophysical features. Satellite, RPAS, and SAR data were involved. This preface briefly describes each contribution published in the Special Issue.

    Disaster Site Structure Analysis: Examining Effective Remote Sensing Techniques in Blue Tarpaulin Inspection

    This thesis aimed to evaluate three methods of analyzing blue roofing tarpaulin (tarp) placed on homes in post-disaster zones with remote sensing techniques: image segmentation, machine learning (ML), and supervised classification. By assessing these three methods, one can determine the most efficient and accurate way of detecting blue tarps. The concept here was that the most efficient and accurate way to locate blue tarps can aid federal, state, and local emergency management (EM) operations and homeowners. In the wake of a natural disaster such as a tornado, hurricane, thunderstorm, or similar weather event, roofs are the most likely part of a structure to be damaged (Esri Events, 2019). Severe roof damage needs to be mitigated as fast as possible, which in the United States is often done at no cost by the Federal Emergency Management Agency (FEMA).

    The first method, image segmentation, separates parts of a whole image into smaller areas or categories that correspond to distinct items or parts of objects. Each pixel in a remotely sensed image is then classified into categories set by the user. A successful segmentation results when pixels in the same category have comparable multivariate grayscale values and form a linked area, whereas nearby pixels in other categories have distinct values. Machine learning (ML), the second method, is a technique that processes data through many layers for feature identification and pattern recognition. ArcGIS Pro mapping software processes data with ML classification methods to classify remote sensing imagery. Deep learning models may be used to recognize objects, classify images, and, in this case, classify pixels; the resultant model definition file or deep learning software package is used to run inference geoprocessing tools that extract particular item positions, categorize or label objects, or classify the pixels in the image. Finally, supervised classification is based on a system in which a user picks sample pixels in an image that are indicative of certain classes and then tells image-processing software to categorize the other pixels using these training sites as references. The user also specifies the limits for how similar pixels must be to be grouped together, as well as the number of classes into which the image is categorized.

    The importance of tracking blue roofs is multifaceted. Structures with roof damage from natural disasters face many immediate dangers, such as further water and wind damage. These communities are at a critical moment, as responding to the damage efficiently and effectively should occur in the immediate aftermath of a disaster. In part due to programs such as FEMA and the United States Army Corps of Engineers' (USACE) Operation Blue Roof, blue tarpaulins are most often installed on structures to prevent further damage caused by wind and rain. From an unmanned aerial vehicle (UAV) perspective, these blue tarps stand out amid the downed trees, devastated infrastructure, and other debris that populate the area. Understanding that recovery can be one of the most important stages of emergency management, testing techniques for speed, accuracy, and effectiveness will assist in creating more effective emergency management (EM) specialists.
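
    The supervised-classification method described above reduces to a small amount of code: user-picked training pixels teach a classifier, which then labels every pixel in the scene. The RGB values, labels, and choice of classifier below are illustrative assumptions, not the configuration used in the thesis.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Training sites: RGB values of user-picked sample pixels, labeled
    # 1 = blue tarp, 0 = everything else (shingles, vegetation, debris, ...).
    train_pixels = np.array([[30, 60, 200], [45, 80, 210], [50, 90, 190],
                             [120, 100, 80], [60, 140, 60], [200, 195, 190]])
    train_labels = np.array([1, 1, 1, 0, 0, 0])

    clf = RandomForestClassifier(random_state=0).fit(train_pixels, train_labels)

    # Classify a whole (H, W, 3) image by flattening it into a pixel table.
    rng = np.random.default_rng(1)
    image = rng.integers(0, 256, size=(512, 512, 3))  # placeholder UAV image
    pred = clf.predict(image.reshape(-1, 3)).reshape(512, 512)
    print(f"blue-tarp pixels detected: {int(pred.sum())}")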

    Evaluating how lodging affects maize yield estimation based on UAV observations

    Timely and accurate pre-harvest estimates of maize yield are vital for agricultural management. Although many remote sensing approaches have been developed to estimate maize yields, few have been tested under lodging conditions. Thus, the feasibility of existing approaches under lodging conditions and the influence of lodging on maize yield estimates both remain unclear. To address this situation, this study develops a lodging index to quantify the degree of lodging. The index is based on RGB and multispectral images obtained from a low-altitude unmanned aerial vehicle and proves to be an important predictor variable in a random forest regression (RFR) model for accurately estimating maize yield after lodging. The results show that (1) the lodging index accurately describes the degree of lodging of each maize plot, (2) the yield-estimation model that incorporates the lodging index provides slightly more accurate yield estimates than the model without it at three important growth stages of maize (tasseling, milking, denting), and (3) the RFR model with lodging index applied at the denting (R5) stage yields the best performance of the three growth stages, with R2 = 0.859, a root mean square error (RMSE) of 1086.412 kg/ha, and a relative RMSE of 13.1%. This study thus provides valuable insight into the precise estimation of crop yield and demonstrates that incorporating a lodging stress-related variable into the model leads to accurate and robust estimates of crop grain yield.
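
    A random forest regression of the kind described above can be sketched as follows. The spectral predictors, lodging index, and yields are synthetic stand-ins; only the general structure (a lodging index joined to spectral predictors, with R2, RMSE, and relative RMSE reporting) mirrors the study.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(2)
    n = 300
    spectral = rng.random((n, 5))   # e.g., per-plot vegetation indices
    lodging = rng.random((n, 1))    # 0 = upright, 1 = fully lodged
    X = np.hstack([spectral, lodging])
    # Synthetic yields in kg/ha that fall as lodging increases.
    y = 8000 + 4000 * spectral[:, 0] - 3000 * lodging[:, 0] + rng.normal(0, 500, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"R2={r2_score(y_te, pred):.2f}  RMSE={rmse:.0f} kg/ha  "
          f"rRMSE={rmse / y_te.mean():.1%}")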

    Image-based Microplot Segmentation/Detection and Deep Learning in Plant Breeding Experiments

    In the coming years, the agricultural sector will encounter significant challenges from population growth, climate change, and evolving consumer demands. To address these challenges, farmers and plant breeders actively develop advanced plant varieties with enhanced productivity and resilience to harsh environmental conditions. However, the current methods for evaluating plant traits, such as manual operations and visual assessment by breeders, are time-consuming and subjective. A promising solution to this issue is image-based phenotyping, which leverages image-processing and machine-learning techniques to facilitate rapid and objective monitoring of numerous plants, enabling breeders to make more informed decisions. To perform per-microplot phenotypic analysis from the imagery and extract phenotypic traits from the field, it is necessary to identify and segment individual microplots (small subdivided areas within a field) in the orthomosaics. Nonetheless, the current procedures for segmenting and identifying microplots within aerial imagery used in agricultural field experiments require manual operations, resulting in considerable time and labour investments. By automating this process, the evaluation of microplot phenotypes, such as physical traits, can be expedited, facilitating automated monitoring and quantification of plant characteristics. Our objective is to develop novel phenotyping algorithms that segment, detect, and classify microplots using image-processing and machine-learning techniques. The thesis comprises four projects: a comprehensive review of vegetation and microplot segmentation methods, the development of algorithms for detecting rectangular microplots, the development of algorithms for detecting non-rectangular microplots, and the utilization of deep learning techniques to predict lodging on microplots, highlighting the impact of deep learning on microplot phenotyping. These innovative approaches possess broad applicability in remote sensing field trials, encompassing diverse applications such as weed detection, crop row identification, plant recognition, height estimation, yield prediction, and lodging detection. Moreover, our proposed methods hold great potential for streamlining microplot phenotyping efforts by reducing the need for labour-intensive manual procedures.
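
    For the simplest case of a regular, axis-aligned layout, microplot segmentation of an orthomosaic amounts to tiling. The sketch below assumes hypothetical plot counts and image dimensions; the non-rectangular layouts addressed in the thesis require more elaborate detection methods.

    import numpy as np

    def grid_microplots(ortho: np.ndarray, rows: int, cols: int) -> list:
        # Split an (H, W, C) orthomosaic into rows * cols microplot tiles.
        h, w = ortho.shape[0] // rows, ortho.shape[1] // cols
        return [ortho[r * h:(r + 1) * h, c * w:(c + 1) * w]
                for r in range(rows) for c in range(cols)]

    ortho = np.zeros((4000, 6000, 3), dtype=np.uint8)  # placeholder orthomosaic
    plots = grid_microplots(ortho, rows=20, cols=30)
    print(len(plots), plots[0].shape)  # 600 tiles, each 200 x 200 x 3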

    A Multi-Sensor Phenotyping System: Applications on Wheat Height Estimation and Soybean Trait Early Prediction

    Phenotyping is an essential aspect of plant breeding research, since it is the foundation of the plant selection process. Traditional plant phenotyping methods, such as manually measuring and recording plant traits, can be inefficient, laborious, and prone to error. With the help of modern sensing technologies, high-throughput field phenotyping has recently become popular due to its ability to sense various crop traits non-destructively and with high efficiency. A multi-sensor phenotyping system equipped with red-green-blue (RGB) cameras, radiometers, ultrasonic sensors, spectrometers, a global positioning system (GPS) receiver, a pyranometer, a temperature and relative humidity probe, and a light detection and ranging (LiDAR) sensor was first constructed, and a LabVIEW program was developed for sensor control and data acquisition. Two studies were conducted, focusing on system performance examination and data exploration, respectively. The first study compared wheat height measurements from the ultrasonic sensor and the LiDAR. Canopy heights of 100 wheat plots were estimated five times over the season by the ground phenotyping system, and the results were compared to manual measurements. Overall, LiDAR provided the better estimates, with a root mean square error (RMSE) of 0.05 m and an R2 of 0.97. The ultrasonic sensor did not perform well in our application. In conclusion, LiDAR was recommended as a reliable method for wheat height evaluation. The second study explored the possibility of early prediction of soybean traits from color and texture features of canopy images. A total of 6,383 RGB images were captured at the V4/V5 growth stage over 5,667 soybean plots growing at four locations. From each image, 140 color features and 315 gray-level co-occurrence matrix (GLCM)-based texture features were derived. Two additional variables were also introduced to account for the location and timing differences between images. Cubist and Random Forests were used for regression and classification modelling, respectively. Yield (RMSE=9.82, R2=0.68), maturity (RMSE=3.70, R2=0.76), and seed size (RMSE=1.63, R2=0.53) were identified as potential soybean traits that might be early-predictable. Advisor: Yufeng G
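
    As an illustration of the texture features mentioned above, the sketch below extracts a few GLCM statistics from a grayscale canopy image using scikit-image; the distances, angles, and properties are illustrative choices, not necessarily those used in the study.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(3)
    gray = rng.integers(0, 256, size=(200, 200), dtype=np.uint8)  # placeholder canopy image

    # Co-occurrence matrices at two offsets and two orientations.
    glcm = graycomatrix(gray, distances=[1, 3], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)

    # One value per (distance, angle) pair for each texture property.
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        print(prop, np.round(graycoprops(glcm, prop).ravel(), 3))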
