
    Clarification of Water Stress in Apple Seedlings Using HSI Texture with Machine Learning Technique

    Apples are known for their nutritional and economic value. Accurate and rapid diagnosis of water status in apple seedlings on an individual rootstock basis is a prerequisite for precision water management. This study presents a rapid and non-destructive approach for estimating water content in apple seedlings at the leaf level. A PIKA L system collects hyperspectral images (400-1000 nm) of apple leaves. To the authors' knowledge, no prior work has applied a spectral-texture approach to plant water stress. Our research extracts spatial information, the gray-level co-occurrence matrix (GLCM), from feature-wavelength images of the hypercubes. Machine learning methods are applied to these spatial feature matrices to identify apple leaves under different water stresses. In addition, differences in spectral responses were analysed using machine learning techniques for sorting apple seedlings with varying water treatments (dry, normal, and overwatering). We also measured chlorophyll to determine the relationship between hyperspectral characteristics and physiological changes. The results indicate that the fusion of texture features and hyperspectral imaging, coupled with machine learning techniques, is promising and shows strong potential for determining water stress in the leaves of apple seedlings.
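    The GLCM computation the abstract relies on can be sketched compactly. In this numpy version the function names, the single horizontal offset, and the two-level quantization are illustrative assumptions (production code would typically use scikit-image's `graycomatrix` and aggregate several distances and angles):

```python
import numpy as np

def glcm(img, levels):
    """Normalized symmetric gray-level co-occurrence matrix for the
    horizontal offset (distance 1, angle 0). `img` holds integer gray
    levels in [0, levels)."""
    pairs = (img[:, :-1].ravel(), img[:, 1:].ravel())
    m = np.zeros((levels, levels))
    np.add.at(m, pairs, 1)   # count co-occurring gray-level pairs
    m = m + m.T              # make the matrix symmetric
    return m / m.sum()

def glcm_features(m):
    """Classic Haralick-style texture statistics derived from a GLCM."""
    i, j = np.indices(m.shape)
    return {
        "contrast": float((m * (i - j) ** 2).sum()),
        "homogeneity": float((m / (1.0 + (i - j) ** 2)).sum()),
        "energy": float((m ** 2).sum()),
    }
```

    One such feature dictionary per selected wavelength band would form the spatial feature vector fed to the classifiers.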

    High-throughput phenotyping of plant leaf morphological, physiological, and biochemical traits on multiple scales using optical sensing

    Acquisition of plant phenotypic information facilitates plant breeding, sheds light on gene action, and can be applied to optimize the quality of agricultural and forestry products. Because leaves often show the fastest responses to external environmental stimuli, leaf phenotypic traits are indicators of plant growth, health, and stress levels. The combination of new imaging sensors, image processing, and data analytics permits measurement over the full life span of plants at high temporal resolution and at several organizational levels, from organs to individual plants to field populations of plants. We review the optical sensors and associated data analytics used for measuring morphological, physiological, and biochemical traits of plant leaves on multiple scales. We summarize the characteristics, advantages, and limitations of optical sensing and data-processing methods applied in various plant phenotyping scenarios. Finally, we discuss the future prospects of plant leaf phenotyping research. This review aims to help researchers choose appropriate optical sensors and data-processing methods to acquire plant leaf phenotypes rapidly, accurately, and cost-effectively.

    Development of a robotic platform for maize functional genomics research

    The food supply requirement of a growing global population leads to an increasing demand for agricultural crops. Without enlarging the current cultivated area, the only way to satisfy the increasing food demand is to improve the yield per acre. Improved production practices, fertilization, and the choice of productive crops are feasible approaches. Picking beneficial genotypes is a genetic optimization problem, so a biological tool is needed to study the function of crop genes, in particular to identify genes important for agronomic traits. Virus-induced gene silencing (VIGS) can be used as such a tool by knocking down the expression of genes to test their functions. The use of VIGS and other functional genomics approaches in corn plants has increased the need to rapidly associate genes with traits. A significant amount of observation, comparison, and data analysis is required for such corn genetic studies. An autonomous maize functional genomics system with the capacity to collect data, measure parameters, and identify virus-infected plants is therefore needed. This research project established a system combining sensors with customized algorithms that can distinguish a virus-infected plant and measure parameters of maize plants. An industrial robot arm was used to collect data from multiple views with 3D sensors. Hand-eye calibration between a 2D color camera and the robot arm was performed to transform different camera coordinates into arm-based coordinates. TCP socket-based software written in Visual C++ was developed on both the robot-arm side and the PC side to perform bidirectional real-time communication. A 3D time-of-flight (ToF) camera was used to reconstruct the corn plant model. The point clouds of corn plants, captured from different views, were merged into one representation through a homogeneous transform matrix. Pass-through and statistical outlier removal filters from the Point Cloud Library were used to remove background and random noise. An algorithm for leaf and stem segmentation based on the morphological characteristics of corn plants was developed. A least-squares method was used to fit the skeletons of leaves for computation of parameters such as leaf length and number. After locating the leaf center, the arm positions the 2D camera for color imaging. Color-based segmentation was applied to pick a rectangular area of interest on the leaf image. The algorithm computing the Gray-Level Co-occurrence Matrix (GLCM) of the leaf image was implemented using the OpenCV library. After training, Bayes classification was used to identify infected corn plant leaves based on GLCM values. The system user interface is capable of generating data collection commands, 3D reconstruction, parameter table output, color image acquisition control, specific leaf probing, and infected corn leaf diagnosis. This application was developed in a Qt cross-platform environment with multithreading between tasks, making the interface user-friendly and efficient.
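    The statistical outlier removal filter named above has a simple definition; this numpy sketch mirrors the rule behind PCL's `StatisticalOutlierRemoval` (the `k` and `std_ratio` values are illustrative assumptions, not the thesis's actual settings):

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbors exceeds
    the global mean of those distances by more than std_ratio standard
    deviations -- the same rule as PCL's StatisticalOutlierRemoval filter.
    (PCL uses a k-d tree; brute-force pairwise distances here for clarity.)"""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)                       # column 0 is the zero self-distance
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

    A pass-through filter is simpler still: it keeps only points whose coordinate along one axis falls inside a given interval, which is how the background behind the plant is cropped away.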

    Image Analysis and Machine Learning in Agricultural Research

    Agricultural research has been a focus for academia and industry to improve human well-being. Given the challenges of water scarcity, global warming, and increased prices of fertilizer and fossil fuel, improving the efficiency of agricultural research has become even more critical. Data collection by humans presents several challenges, including: 1) subjectivity and poor reproducibility of visual evaluation, 2) safety risks when dealing with highly toxic chemicals or severe weather events, 3) unavoidable mistakes, and 4) low efficiency and speed. Image analysis and machine learning are versatile and advantageous in evaluating different plant characteristics and could help with agricultural data collection. In the first chapter, different types of imaging (e.g., RGB, multi/hyperspectral, and thermal imaging) are explored in detail for their advantages in different agricultural applications. The image-analysis process demonstrates how target features, including shape, edge, texture, and color, are extracted for analysis. After acquiring feature information, machine learning can be used to automatically detect or predict features of interest such as disease severity. In the second chapter, case studies of different agricultural applications are demonstrated, including: 1) leaf damage symptoms, 2) stress evaluation, 3) plant growth evaluation, 4) stand/insect counting, and 5) evaluation of produce quality. The case studies showed that image analysis is often more advantageous than visual rating, offering increased objectivity, speed, and more reproducible results. In the third chapter, machine learning was explored using romaine lettuce images from RD4AG to automatically grade bolting and compactness (two important parameters for lettuce quality). Although the accuracies were 68.4% and 66.6%, respectively, a much larger database and many improvements are needed to increase model accuracy and reliability. With advances in cameras, computers with high computing power, and the development of different algorithms, image analysis and machine learning have the potential to replace part of the labor and improve current data collection procedures in agricultural research. Advisor: Gary L. Hei
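    Among the color features the first chapter describes, one simple and widely used example is the Excess Green index for separating plant pixels from background. This is a generic sketch, not the dissertation's pipeline; the 0.1 threshold and function names are illustrative assumptions:

```python
import numpy as np

def excess_green_mask(rgb, thresh=0.1):
    """Segment plant pixels with the Excess Green index, ExG = 2g - r - b,
    computed on chromaticity-normalized channels. The 0.1 threshold is an
    illustrative assumption; real pipelines tune it or use Otsu's method."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                     # guard against all-black pixels
    r, g, b = np.moveaxis(rgb / s, 2, 0)
    return (2 * g - r - b) > thresh

def mean_color(rgb, mask):
    """A simple color feature: mean RGB of the segmented plant region."""
    return rgb[mask].mean(axis=0)
```

    Features like this mean color, combined with shape, edge, and texture statistics, are what a classifier would then use for grading.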

    Generation of 360 Degree Point Cloud for Characterization of Morphological and Chemical Properties of Maize and Sorghum

    Recently, image-based high-throughput phenotyping methods have gained popularity in plant phenotyping. Imaging projects 3D space onto a 2D grid, losing depth information and making the retrieval of plant morphological traits challenging. In this study, LiDAR was used along with a turntable to generate a 360-degree point cloud of single plants. A LabVIEW program was developed to control and synchronize both devices. A data processing pipeline was built to recover the digital surface models of the plants. The system was tested with maize and sorghum plants to derive morphological properties including leaf area, leaf angle, and leaf angular distribution. The results showed a high correlation between the manual measurements and the LiDAR measurements of leaf area (R² > 0.91). Also, Structure from Motion (SfM) was used to generate 3D spectral point clouds of single plants at different narrow spectral bands, using 2D images acquired by moving the camera completely around the plants. Seven narrow-band (bandwidth of 10 nm) optical filters, with center wavelengths at 530 nm, 570 nm, 660 nm, 680 nm, 720 nm, 770 nm, and 970 nm, were used to obtain the images for generating a spectral point cloud. The possibility of deriving the biochemical properties of the plants (nitrogen, phosphorus, potassium, and moisture content) from the multispectral information in the 3D point cloud was tested through statistical modeling techniques. The results were promising and indicate the possibility of generating a 3D spectral point cloud for deriving both the morphological and biochemical properties of plants in the future. Advisor: Yufeng G
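    The abstract does not spell out the registration step, but for a turntable acquisition the per-scan transform is just a rotation about the table's axis. A minimal numpy sketch, assuming scans taken at a fixed angular step around a z-aligned turntable axis (function names are illustrative):

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the turntable's (vertical) z axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def merge_scans(scans, step_deg):
    """Undo the turntable rotation for scan i (rotate by -i*step_deg) and
    stack all scans into one 360-degree cloud in the common frame."""
    return np.vstack([pts @ rot_z(-i * step_deg).T for i, pts in enumerate(scans)])
```

    In practice the axis location and sensor pose come from a calibration step, so each scan is first shifted into the turntable frame before the rotation is undone.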

    Quantifying soybean phenotypes using UAV imagery and machine learning, deep learning methods

    Crop breeding programs aim to introduce new cultivars with improved traits to address the food crisis. Food production will need to roughly double to feed the growing global population by 2050. Soybean is one of the major grains in the world, and the US alone contributes around 35 percent of world soybean production. To increase soybean production, breeders still rely on a conventional breeding strategy, which is mainly a trial-and-error process. These constraints limit the expected progress of crop breeding programs. The goal was to quantify the soybean phenotypes of plant lodging and pubescence color using UAV-based imagery and advanced machine learning. Plant lodging and pubescence color are two of the most important phenotypes for soybean breeding programs. They are conventionally evaluated visually by breeders, which is time-consuming and subject to human error. This study investigated the potential of unmanned aerial vehicle (UAV)-based imagery and machine learning in the assessment of lodging conditions, and of deep learning in the assessment of pubescence color, in soybean breeding lines. A UAV imaging system equipped with an RGB (red-green-blue) camera was used to collect imagery data of 1,266 four-row plots in a soybean breeding field at the reproductive stage. Soybean lodging scores and pubescence scores were visually assessed by experienced breeders. Lodging scores were grouped into four classes, i.e., non-lodging, moderate lodging, high lodging, and severe lodging, while pubescence color scores were grouped into three classes, i.e., gray, tawny, and segregation. UAV images were stitched to build orthomosaics, and soybean plots were segmented using a grid method. Twelve image features were extracted from the collected images to assess the lodging scores of each breeding line. Four models, i.e., extreme gradient boosting (XGBoost), random forest (RF), K-nearest neighbor (KNN), and artificial neural network (ANN), were evaluated to classify soybean lodging classes. Five data pre-processing methods were used to treat the imbalanced dataset and improve classification accuracy. Results indicate that the pre-processing method SMOTE-ENN consistently performed well for all four classifiers (XGBoost, RF, KNN, and ANN), achieving the highest overall accuracy (OA), the lowest misclassification, and higher F1-scores and Kappa coefficients. This suggests that Synthetic Minority Over-sampling with Edited Nearest Neighbour cleaning (SMOTE-ENN) may be an excellent pre-processing method for classification tasks on unbalanced datasets. Furthermore, an overall accuracy of 96 percent was obtained using the SMOTE-ENN dataset and the ANN classifier. To classify soybean pubescence color, seven pre-trained deep learning models, i.e., DenseNet121, DenseNet169, DenseNet201, ResNet50, InceptionResNet-V2, Inception-V3, and EfficientNet, were used, and images of each plot were fed into the models. Data were augmented using two rotational and two scaling factors to enlarge the datasets. Among the seven pre-trained deep learning models, the ResNet50 and DenseNet121 classifiers showed the highest overall accuracy, 88 percent, along with higher precision, recall, and F1-scores for all three pubescence color classes. In conclusion, the developed UAV-based high-throughput phenotyping system can gather image features to estimate and classify crucial soybean phenotypes, which will help breeders assess phenotypic variation in breeding trials. RGB imagery-based classification could also be a cost-effective choice for breeders and associated researchers in plant breeding programs for identifying superior genotypes.
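    The SMOTE half of SMOTE-ENN synthesizes minority-class samples by interpolating between nearby minority points; ENN then removes samples misclassified by their neighbors. This is a minimal illustration of the SMOTE step only, not the implementation the study used (in practice the combined method is available as `SMOTEENN` in the imbalanced-learn package):

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE: synthesize minority-class samples by interpolating
    between a random minority point and one of its k nearest minority
    neighbors. (SMOTE-ENN additionally cleans the combined dataset with
    Edited Nearest Neighbours, not shown here.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest neighbors, excluding self
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                   # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)
```

    Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the convex hull of the original minority data.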

    UAVs for Vegetation Monitoring: Overview and Recent Scientific Contributions

    This paper reviewed a set of twenty-one original and innovative papers included in a special issue on UAVs for vegetation monitoring, which proposed new methods and techniques applied to diverse agricultural and forestry scenarios. Three general categories were considered: (1) sensors and vegetation indices used, (2) technological goals pursued, and (3) agroforestry applications. Some investigations focused on issues related to UAV flight operations, spatial resolution requirements, and computation and data analytics, while others studied the ability of UAVs to characterize relevant vegetation features (mainly canopy cover and crop height) or to detect different plant/crop stressors, such as nutrient content/deficiencies, water needs, weeds, and diseases. The general goal was to propose UAV-based technological solutions for better use of agricultural and forestry resources and more efficient production, with significant economic and environmental benefits.
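    The vegetation indices in the review's first category are per-pixel band-ratio quantities computed from the UAV imagery; the best-known is NDVI, sketched below as a generic example (the specific indices used vary across the reviewed papers):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, so NDVI approaches 1; soil and water sit near or below 0."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero
```

    Applied to co-registered NIR and red orthomosaic bands, this yields a per-pixel map that canopy-cover and stress-detection methods can threshold or aggregate per plot.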

    High-throughput phenotyping technology for corn ears

    The phenotype of an organism, in this case a plant, includes traits or characteristics that can be measured using a technical procedure. Phenotyping is an important activity in plant breeding, since it gives breeders an observable representation of the plant’s genetic code, the genotype. The word phenotype originates from the Greek word “phainein”, meaning “to show”, and the word “typos”, meaning “type”. Ideally, the development of phenotyping technologies would proceed in lockstep with genotyping technologies, but unfortunately it does not; there currently exists a major discrepancy between the technological sophistication of genotyping and phenotyping, and the gap is getting wider. Whereas genotyping has become a high-throughput, low-cost, standardized procedure, phenotyping still comprises ample manual measurements, which are time-consuming, tedious, and error prone. This project aims to alleviate that problem. To aid breeders, a method was devised that allows high-throughput phenotyping of corn ears, based on an existing imaging arrangement that produces frontal views of the ears. This thesis describes the development of machine vision algorithms that measure overall ear parameters such as ear length, ear diameter, and cap percentage (the proportion of the ear that features kernels versus the barren area). The main image processing functions used were segmentation, skewness correction, morphological operations, and image registration. To obtain a kernel count, an “ear map” was constructed using both a morphological operation and a feature-matching operation. The main challenge for the morphological operation was to accurately select only the kernel rows that are frontally exposed in each single image; this was addressed by developing a shadow-recognition algorithm. The main challenge for the feature-matching operation was to detect and match image feature points; this was addressed by applying Harris corner detection and the SIFT descriptor. Once the ear map is created, many other morphological kernel parameters (area, location, and circumference, to name a few) can be determined. Remaining challenges in this research are pointed out, including sample choice, apparatus modification, and algorithm improvement. Suggestions and recommendations for future work are also provided.
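    Once segmentation yields a binary ear mask (and a kernel mask for cap percentage), the overall ear parameters follow directly from the definitions in the abstract. A minimal sketch in pixel units, with function names as assumptions (the thesis would convert to physical units via camera calibration):

```python
import numpy as np

def ear_length_diameter(ear_mask):
    """Ear length = vertical extent of the mask; ear diameter = widest row.
    Both returned in pixels; a frontal view of the ear is assumed."""
    rows = np.flatnonzero(ear_mask.any(axis=1))
    length = int(rows[-1] - rows[0] + 1)
    diameter = int(ear_mask.sum(axis=1).max())
    return length, diameter

def cap_percentage(ear_mask, kernel_mask):
    """Proportion of the ear that features kernels versus the barren area."""
    return kernel_mask.sum() / ear_mask.sum()
```

    The per-kernel parameters (area, location, circumference) would come later, from connected components of the stitched ear map rather than from these whole-ear masks.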