19 research outputs found

    Rapeseed Seedling Stand Counting and Seeding Performance Evaluation at Two Early Growth Stages Based on Unmanned Aerial Vehicle Imagery

    The development of unmanned aerial vehicles (UAVs) and image processing algorithms for field-based phenotyping offers a non-invasive and effective technology for obtaining plant growth traits such as canopy cover and plant height in the field. Crop seedling stand count in early growth stages is important not only for determining plant emergence, but also for planning related agronomic practices. The main objective of this research was to develop practical and rapid remote sensing methods for early growth stage stand counting to evaluate mechanically seeded rapeseed (Brassica napus L.) seedlings. Rapeseed was seeded in a field by three different seeding devices. A digital single-lens reflex camera was installed on a UAV platform to capture ultrahigh-resolution RGB images at two growth stages, when most rapeseed plants had at least two leaves. Rapeseed plant objects were segmented from vegetation index images using the Otsu thresholding method. After segmentation, shape features such as area, length-width ratio and elliptic fit were extracted from the segmented rapeseed plant objects to establish regression models of seedling stand count. Three row characteristics (the coefficient of variation of row spacing uniformity, the error rate of the row spacing and the coefficient of variation of seedling uniformity) were further calculated for seeding performance evaluation after crop row detection. Results demonstrated that shape features had strong correlations with ground-measured seedling stand count. The regression models achieved R-squared values of 0.845 and 0.867 for the two growth stages, respectively. The mean absolute errors of total stand count were 9.79% and 5.11% for the two respective stages. A single model over these two stages had an R-squared value of 0.846, and the total number of rapeseed plants was also accurately estimated with an average relative error of 6.83%. Moreover, the calculated row characteristics were demonstrated to be useful in recognizing areas of failed germination, possibly resulting from skipped or ineffective planting. In summary, this study developed practical UAV-based remote sensing methods and demonstrated their feasibility for rapeseed seedling stand counting and mechanical seeding performance evaluation at early growth stages.
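The abstract does not give implementation details, so purely as an illustration, Otsu thresholding of an RGB-derived vegetation index can be sketched as follows. The excess green (ExG) index and the pure-NumPy Otsu implementation are assumptions of this sketch, not the paper's confirmed pipeline:

```python
import numpy as np

def excess_green(rgb):
    # Excess Green index 2g - r - b on channel-normalized values;
    # plant pixels tend toward high ExG, soil toward low ExG
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

def otsu_threshold(values, bins=256):
    # Classic Otsu: pick the cut that maximizes between-class variance
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 weight at each cut
    w1 = 1 - w0                       # class-1 weight
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    return centers[np.nanargmax(sigma_b)]
```

Pixels with `excess_green(rgb) > otsu_threshold(...)` form the plant mask from which shape features (area, length-width ratio, elliptic fit) would then be extracted.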

    Crop Classification and LAI Estimation Using Original and Resolution-Reduced Images from Two Consumer-Grade Cameras

    Consumer-grade cameras have been increasingly used for remote sensing applications in recent years. However, the performance of this type of camera has not been systematically tested and well documented in the literature. The objective of this research was to evaluate the performance of original and resolution-reduced images taken from two consumer-grade cameras, an RGB camera and a modified near-infrared (NIR) camera, for crop identification and leaf area index (LAI) estimation. Airborne RGB and NIR images taken over a 6.5-square-km cropping area were mosaicked and aligned to create a four-band mosaic with a spatial resolution of 0.4 m. The spatial resolution of the mosaic was then reduced to 1, 2, 4, 10, 15 and 30 m for comparison. Six supervised classifiers were applied to the RGB images and the four-band images for crop identification, and 10 vegetation indices (VIs) derived from the images were related to ground-measured LAI. Accuracy assessment showed that maximum likelihood applied to the 0.4-m images achieved an overall accuracy of 83.3% for the RGB image and 90.4% for the four-band image. Regression analysis showed that the 10 VIs explained 58.7% to 83.1% of the variability in LAI. Moreover, spatial resolutions of 0.4, 1, 2 and 4 m achieved better results for both crop identification and LAI prediction than the coarser spatial resolutions of 10, 15 and 30 m. The results from this study indicate that imagery from consumer-grade cameras can be a useful data source for crop identification and canopy cover estimation.
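Relating a VI to ground-measured LAI, as described above, is in essence a least-squares regression whose R² gives the "variability explained" figures. A minimal sketch, assuming NDVI as one plausible member of the 10 VIs and entirely synthetic numbers:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from NIR and red reflectance
    return (nir - red) / (nir + red + 1e-9)

def fit_lai_model(vi, lai):
    # Ordinary least squares LAI = a*VI + b, plus the R^2 (fraction of
    # LAI variability explained by the index)
    a, b = np.polyfit(vi, lai, 1)
    pred = a * vi + b
    ss_res = np.sum((lai - pred) ** 2)
    ss_tot = np.sum((lai - lai.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```

With real data, `vi` would be the per-plot mean index from the mosaic and `lai` the corresponding ground measurements; an R² of 0.587-0.831 corresponds to the range the study reports.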

    Combining UAV-RGB high-throughput field phenotyping and genome-wide association study to reveal genetic variation of rice germplasms in dynamic response to drought stress

    Accurate and high-throughput phenotyping of the dynamic response of a large rice population to drought stress in the field is a bottleneck for genetic dissection and breeding of drought resistance. Here, high-efficiency and high-frequency image acquisition by an unmanned aerial vehicle (UAV) was used to quantify the dynamic drought response of a rice population under field conditions. Deep convolutional neural networks (DCNNs) and canopy height models were applied to extract highly correlated phenotypic traits, including a UAV-based leaf-rolling score (LRS_uav), plant water content (PWC_uav) and a new composite trait, the drought resistance index by UAV (DRI_uav). The DCNNs achieved accuracy high enough to replace manual leaf-rolling rating (correlation coefficient R = 0.84 for the modeling set and R = 0.86 for the test set). PWC_uav values were precisely estimated (correlation coefficient R = 0.88), and DRI_uav was modeled to monitor the drought resistance of rice accessions dynamically and comprehensively. A total of 111 significantly associated loci were detected by genome-wide association study for the three dynamic traits, and 30.6% of them were not detected in previous mapping studies using nondynamic drought response traits. UAVs and deep learning are confirmed as effective phenotyping techniques for more complete genetic dissection of rice dynamic responses to drought and for exploration of valuable alleles for drought resistance improvement.
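The accuracies above are reported as correlation coefficients (R) between UAV-derived and manually rated or ground-measured trait values. A generic sketch of that evaluation metric (the example arrays are placeholders, not the study's data):

```python
import numpy as np

def pearson_r(pred, obs):
    # Pearson correlation between predicted and observed trait values,
    # the metric behind figures such as R = 0.84 / 0.86 / 0.88
    p = np.asarray(pred, dtype=float) - np.mean(pred)
    o = np.asarray(obs, dtype=float) - np.mean(obs)
    return float((p @ o) / np.sqrt((p @ p) * (o @ o)))
```

Here `pred` would hold the DCNN's leaf-rolling scores (or estimated PWC) and `obs` the corresponding manual ratings or measurements.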

    Automatic Wheat Lodging Detection and Mapping in Aerial Imagery to Support High-Throughput Phenotyping and In-Season Crop Management

    The latest advances in unmanned aerial vehicle (UAV) technology and convolutional neural networks (CNNs) allow us to detect crop lodging in a more precise and accurate way. However, the performance and generalization of a model capable of detecting lodging when the plants show different spectral and morphological signatures have not been investigated much. This study investigated and compared the performance of models trained using aerial imagery collected at two growth stages of winter wheat with different canopy phenotypes. Specifically, three CNN-based models were trained with aerial imagery collected at the early grain filling stage only, at physiological maturity only, and at both stages. Results show that the multi-stage model trained with images from both growth stages outperformed the models trained with images from individual growth stages on all testing data. The mean accuracy of the multi-stage model was 89.23% across both growth stages, while the mean accuracies of the other two models were 52.32% and 84.9%, respectively. This study demonstrates the importance of training data diversity in big data analytics, and the feasibility of developing a universal decision support system for wheat lodging detection and mapping across multiple growth stages with high-resolution remote sensing imagery.
