217 research outputs found

    Utility of Daily 3 m Planet Fusion Surface Reflectance Data for Tillage Practice Mapping with Deep Learning

    Tillage practices alter soil surface structure in ways that can potentially be captured by satellite images with both high spatial and temporal resolution. This study explored tillage practice mapping using the daily Planet Fusion surface reflectance (PF-SR) gap-free 3 m data generated by fusing PlanetScope with Landsat-8, Sentinel-2, and MODIS surface reflectance data. The study area is a 220 km × 220 km agricultural area in South Dakota, USA, and the study used 3285 PF-SR images from September 1, 2020 to August 31, 2021. The PF-SR images for the 433 surveyed fields were sliced into 10,747 non-overlapping time-series patches for training (70%) and evaluation (30%). The training and evaluation patches were drawn from different fields to keep the evaluation data independent. The performance of four deep learning models (2D convolutional neural networks (CNN), 3D CNN, CNN-long short-term memory (LSTM), and attention CNN-LSTM) in tillage practice mapping, as well as their sensitivity to different spatial (3 m, 24 m, and 96 m) and temporal (16-day, 8-day, 4-day, 2-day, and 1-day) resolutions, was examined. Classification accuracy increased continuously with both temporal and spatial resolution. The optimal models (3D CNN and attention CNN-LSTM) achieved ~77% accuracy using 2-day or daily 3 m resolution data, as opposed to ~72% accuracy using 16-day 3 m resolution data or daily 24 m resolution data. This study also analyzed the feature importance of different acquisition dates for the two optimal models. The 3D CNN feature importances were found to agree well with tillage timing: high feature importance was associated with observations during the fall and spring tillage periods (fresh tillage signals), whereas the peak crop growing period (tillage signals weathered and confounded by dense canopy) was characterized by relatively low feature importance. The work provides valuable insights into the utility of deep learning for tillage mapping and change event time identification based on high-resolution imagery.
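The temporal-resolution experiment above (16-day composites down to daily data) amounts to resampling a daily reflectance stack into coarser revisit intervals. A minimal NumPy sketch of that idea, with hypothetical array shapes not taken from the study:

```python
import numpy as np

def temporal_composite(daily_series: np.ndarray, step: int) -> np.ndarray:
    """Average a daily (T, H, W, bands) reflectance stack into
    non-overlapping `step`-day composites, mimicking a coarser
    temporal resolution."""
    t = (daily_series.shape[0] // step) * step  # drop a trailing partial window
    trimmed = daily_series[:t]
    return trimmed.reshape(t // step, step, *daily_series.shape[1:]).mean(axis=1)

# 32 daily 8x8-pixel patches with 4 bands (illustrative sizes only)
daily = np.random.rand(32, 8, 8, 4)
print(temporal_composite(daily, 16).shape)  # (2, 8, 8, 4)
print(temporal_composite(daily, 8).shape)   # (4, 8, 8, 4)
```

With `step=1` the function returns the daily series unchanged, so one pipeline can sweep all five temporal resolutions tested in the study.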

    Mid to Late Season Weed Detection in Soybean Production Fields Using Unmanned Aerial Vehicle and Machine Learning

    Mid- to late-season weeds are those that escape early-season herbicide applications or emerge late in the season. They might not affect crop yield, but if left uncontrolled they produce large numbers of seeds, causing problems in subsequent years. In this study, high-resolution aerial imagery of mid-season weeds in soybean fields was captured using an unmanned aerial vehicle (UAV), and the performance of two automated weed detection approaches – patch-based classification and object detection – was studied for site-specific weed management. For the patch-based classification approach, several conventional machine learning models trained on Haralick texture features were compared with a Mobilenet v2 based convolutional neural network (CNN) model. The results showed that the CNN model had the best classification performance for individual patches. Two image slicing approaches – patches with and without overlap – were tested, and slicing with overlap was found to improve weed detection at the cost of higher inference time. For the object detection approach, two models with different network architectures, Faster RCNN and SSD, were evaluated and compared. Faster RCNN had better overall weed detection performance than SSD with similar inference time, and it also had better detection performance and shorter inference time than the patch-based CNN with overlapping image slicing. The influence of spatial resolution on weed detection accuracy was investigated by simulating UAV imagery captured at different altitudes; Faster RCNN achieved similar performance at a lower spatial resolution. The inference time of Faster RCNN was evaluated on a regular laptop. The results showed the potential of on-farm near real-time weed detection in soybean production fields by capturing UAV imagery with less overlap and processing it with a pre-trained deep learning model, such as Faster RCNN, on regular laptops and mobile devices. Advisor: Yeyin Sh
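The two image slicing schemes compared above differ only in the stride used when tiling the orthomosaic. A minimal sketch of both, assuming a NumPy image array (patch size and strides are illustrative, not the study's values):

```python
import numpy as np

def slice_patches(image: np.ndarray, size: int, stride: int):
    """Slice an image into size x size patches. stride == size gives
    non-overlapping tiles; stride < size gives overlapping patches,
    which improves detection near patch borders at higher inference cost."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(image[y:y + size, x:x + size])
    return patches

img = np.zeros((128, 128, 3))
print(len(slice_patches(img, 64, 64)))  # 4 non-overlapping patches
print(len(slice_patches(img, 64, 32)))  # 9 patches with 50% overlap
```

The overlap variant produces more patches per image, which is exactly the detection-quality versus inference-time trade-off the abstract reports.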

    Aerial Visual Perception in Smart Farming: Field Study of Wheat Yellow Rust Monitoring

    Agriculture is facing severe challenges from crop stresses, threatening its sustainable development and food security. This work exploits aerial visual perception for yellow rust disease monitoring, seamlessly integrating state-of-the-art techniques including UAV sensing, multispectral imaging, vegetation segmentation, and deep learning with U-Net. A field experiment was designed in which winter wheat was infected with yellow rust inoculum, and multispectral aerial images were captured by a DJI Matrice 100 equipped with a RedEdge camera. After image calibration and stitching, the multispectral orthomosaic was labelled for system evaluation by inspecting high-resolution RGB images taken by a Parrot Anafi drone. The merits of the developed framework, which draws on spectral and spatial information concurrently, are demonstrated by its improved performance over a purely spectral classifier based on the classical random forest algorithm. Moreover, various network input band combinations are tested, including the three RGB bands and five spectral vegetation indices selected by the Sequential Forward Selection strategy of the Wrapper algorithm.
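Spectral vegetation indices like those fed to the network above are simple band ratios computed per pixel. A minimal sketch of two common indices from a five-band multispectral cube; the band ordering and array names are assumptions for illustration, not taken from the study:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index; the small epsilon
    avoids division by zero over dark pixels."""
    return (nir - red) / (nir + red + 1e-8)

def ndre(nir: np.ndarray, rededge: np.ndarray) -> np.ndarray:
    """Normalized Difference Red Edge index, often used for
    crop stress and disease monitoring."""
    return (nir - rededge) / (nir + rededge + 1e-8)

# Hypothetical 5-band reflectance cube: blue, green, red, red-edge, NIR
cube = np.random.rand(5, 100, 100)
blue, green, red, re, nir = cube
print(ndvi(nir, red).shape)  # (100, 100)
```

A wrapper-style Sequential Forward Selection would then add one candidate index at a time, keeping whichever most improves the downstream classifier's score.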

    Precision Weed Management Based on UAS Image Streams, Machine Learning, and PWM Sprayers

    Weed populations in agricultural production fields are often scattered and unevenly distributed; however, herbicides are broadcast evenly across fields. Although effective, in the case of post-emergent herbicides this means far more pesticide is used than necessary. A novel weed detection and control workflow was evaluated targeting Palmer amaranth in soybean (Glycine max) fields. High spatial resolution (0.4 cm) unmanned aircraft system (UAS) image streams were collected, annotated, and used to train 16 object detection convolutional neural networks (CNNs; RetinaNet, Faster R-CNN, Single Shot Detector, and YOLO v3), each trained on imagery with 0.4, 0.6, 0.8, or 1.2 cm spatial resolution. Models were evaluated on imagery from four production fields containing approximately 7,800 weeds. The highest performing model was Faster R-CNN trained on 0.4 cm imagery (precision = 0.86, recall = 0.98, and F1-score = 0.91). A site-specific workflow leveraging the highest performing trained CNN models was evaluated in replicated field trials. Weed control (%) was compared between a broadcast treatment and the proposed site-specific workflow, which was applied using a pulse-width modulated (PWM) sprayer. Results indicate no statistically significant (p < .05) difference in weed control measured one (M = 96.22%, SD = 3.90 and M = 90.10%, SD = 9.96), two (M = 95.15%, SD = 5.34 and M = 89.64%, SD = 8.58), and three weeks (M = 88.55%, SD = 11.07 and M = 81.78%, SD = 13.05) after application between broadcast and site-specific treatments, respectively. Furthermore, there was a significant (p < .05) 48% mean reduction in applied area (m²) between broadcast and site-specific treatments across both years. Equivalent post-application efficacy can be achieved with significant reductions in herbicide use if weeds are targeted through site-specific applications. Site-specific weed maps can be generated and executed using accessible technologies like UAS, open-source CNNs, and PWM sprayers.
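The core of a site-specific workflow like the one above is translating detected weed positions into per-nozzle commands so only infested sections are sprayed. A heavily simplified sketch of that mapping; the boom geometry, nozzle count, and function names are hypothetical, not the study's implementation:

```python
import numpy as np

def nozzle_commands(weed_x_positions, boom_width_m=12.0, n_nozzles=24):
    """Map detected weed positions (meters across the boom) to boolean
    per-nozzle on/off commands: the basic idea behind site-specific
    spraying with individually controlled PWM nozzles."""
    section = boom_width_m / n_nozzles       # width covered by each nozzle
    commands = np.zeros(n_nozzles, dtype=bool)
    for x in weed_x_positions:
        idx = min(int(x / section), n_nozzles - 1)
        commands[idx] = True
    return commands

cmd = nozzle_commands([0.3, 5.1, 11.9])
print(cmd.sum(), "of", cmd.size, "nozzles open")  # 3 of 24 nozzles open
```

The fraction of open nozzles over a pass is what drives the applied-area reduction the study reports relative to a broadcast application, where every nozzle stays open.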

    A Neural Network Method for Classification of Sunlit and Shaded Components of Wheat Canopies in the Field Using High-Resolution Hyperspectral Imagery

    (1) Background: Information-rich hyperspectral sensing, together with robust image analysis, is providing new research pathways in plant phenotyping. This combination facilitates the acquisition of spectral signatures of individual plant organs and provides detailed information about the physiological status of plants. Despite the advances in hyperspectral technology in field-based plant phenotyping, little is known about the characteristic spectral signatures of shaded and sunlit components in wheat canopies. Non-imaging hyperspectral sensors cannot provide spatial information and are thus unable to distinguish the spectral reflectance differences between canopy components. On the other hand, the rapid development of high-resolution imaging spectroscopy sensors opens new opportunities to investigate the reflectance spectra of individual plant organs, which leads to an understanding of canopy biophysical and chemical characteristics. (2) Method: This study reports the development of a computer vision pipeline to analyze ground-acquired imaging spectrometry with high spatial and spectral resolutions for plant phenotyping. The work focuses on the critical steps in the image analysis pipeline, from pre-processing to the classification of hyperspectral images. Two convolutional neural networks (CNNs) are employed to automatically map wheat canopy components in shaded and sunlit regions and to determine their specific spectral signatures. The first method uses pixel vectors of the full spectral features as inputs to the CNN model; the second integrates the dimension reduction technique known as linear discriminant analysis (LDA) with the CNN to increase feature discrimination and improve computational efficiency. (3) Results: The proposed technique alleviates the limitations and lack of separability inherent in existing pre-defined hyperspectral classification methods. It optimizes the use of hyperspectral imaging and ensures that the data provide information about the spectral characteristics of the targeted plant organs rather than the background. We demonstrated that high-resolution hyperspectral imagery along with the proposed CNN model can be powerful tools for characterizing sunlit and shaded components of wheat canopies in the field. The presented method will provide significant advances in the determination and relevance of spectral properties of shaded and sunlit canopy components under natural light conditions.
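The LDA step described above projects high-dimensional pixel spectra onto a few discriminant axes before classification. For the two-class case (e.g., sunlit versus shaded pixels), Fisher's LDA direction has a closed form; a minimal NumPy sketch on synthetic data, with band count and class names assumed for illustration:

```python
import numpy as np

def fisher_lda_direction(X1: np.ndarray, X2: np.ndarray) -> np.ndarray:
    """Two-class Fisher LDA: w = Sw^-1 (m1 - m2), the direction that
    maximizes between-class scatter relative to within-class scatter,
    collapsing the spectral dimension to one discriminant axis."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    # Small ridge term keeps the solve stable for near-singular scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
sunlit = rng.normal(0.6, 0.05, size=(200, 50))  # hypothetical 50-band pixels
shaded = rng.normal(0.3, 0.05, size=(200, 50))
w = fisher_lda_direction(sunlit, shaded)
print((sunlit @ w).mean() > (shaded @ w).mean())  # classes separate along w
```

Feeding such low-dimensional projections to a CNN is one way to realize the computational-efficiency gain the abstract attributes to the LDA+CNN variant.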

    Methods for Detecting and Classifying Weeds, Diseases and Fruits Using AI to Improve the Sustainability of Agricultural Crops: A Review

    The rapid growth of the world’s population has put significant pressure on agriculture to meet the increasing demand for food. In this context, agriculture faces multiple challenges, one of which is weed management. While herbicides have traditionally been used to control weed growth, their excessive and indiscriminate use can lead to environmental pollution and herbicide resistance. To address these challenges, deep learning models have become a possible decision-making tool in the agricultural industry, drawing on the massive amounts of information collected from smart farm sensors. However, agriculture’s varied environments make it challenging to test and adopt new technology effectively. This study reviews recent advances in deep learning models and methods for detecting and classifying weeds to improve the sustainability of agricultural crops. It compares performance metrics such as recall, accuracy, F1-score, and precision, and highlights the adoption of novel techniques, such as attention mechanisms, single-stage detection models, and new lightweight models, which can enhance model performance. The use of deep learning methods in weed detection and classification has shown great potential for improving crop yields and reducing the adverse environmental impacts of agriculture. The reduction in herbicide use can prevent pollution of water, food, land, and the ecosystem and slow the development of herbicide resistance in weeds, helping agriculture mitigate and adapt to climate change by minimizing its environmental impact and improving its sustainability. The study also highlights the challenges faced in adopting new technology in agriculture and provides valuable insights into the latest advances in process systems engineering and technology for agricultural activities.
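The metrics the review compares across studies (precision, recall, F1-score, accuracy) all derive from the same confusion counts. A minimal sketch of those standard definitions, with illustrative counts that are not from any reviewed study:

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int = 0):
    """Precision, recall, F1, and accuracy from confusion counts,
    the metrics typically used to compare weed detection models."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = detection_metrics(tp=90, fp=10, fn=5, tn=95)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.95 0.92
```

Note that accuracy depends on the true-negative count, which is ill-defined for object detection; this is why the detection literature leans on precision, recall, and F1 instead.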

    Ensuring Agricultural Sustainability through Remote Sensing in the Era of Agriculture 5.0

    This work was supported by the projects: "VIRTUOUS", funded by the European Union's Horizon 2020 Project H2020-MSCA-RISE-2019, Ref. 872181; "SUSTAINABLE", funded by the European Union's Horizon 2020 Project H2020-MSCA-RISE-2020, Ref. 101007702; and the "Project of Excellence" from Junta de Andalucia 2020, Ref. P18-H0-4700. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
    Timely and reliable information about crop management, production, and yield is considered of great utility by stakeholders (e.g., national and international authorities, farmers, commercial units, etc.) to ensure food safety and security. By 2050, according to Food and Agriculture Organization (FAO) estimates, around 70% more agricultural production will be needed to fulfil the demands of the world population. Likewise, to meet the Sustainable Development Goals (SDGs), especially the second goal of “zero hunger”, potential technologies like remote sensing (RS) need to be efficiently integrated into agriculture. The application of RS is indispensable today for highly productive and sustainable agriculture. Therefore, the present study draws a general overview of RS technology, with a special focus on its principal platforms, i.e., satellites and remotely piloted aircrafts (RPAs), and the sensors used, in relation to the 5th industrial revolution. RS technology has found applications in agriculture through satellite imagery since 1957, later enriched by the incorporation of RPAs, which are further pushing the boundaries of proficiency through upgraded sensors capable of higher spectral, spatial, and temporal resolutions. More prominently, wireless sensor technologies (WST) have streamlined real-time information acquisition and the programming of respective measures. Improved algorithms and sensors can not only add significant value to crop data acquisition, but can also devise simulations on yield, harvesting and irrigation periods, meteorological data, etc., by making use of cloud computing. RS technology generates huge sets of data that necessitate the incorporation of artificial intelligence (AI) and big data to extract useful products, thereby augmenting the adeptness and efficiency of agriculture to ensure its sustainability. These technologies have made it possible to orient current research towards the estimation of plant physiological traits rather than structural parameters. Futuristic approaches for benefiting from these cutting-edge technologies are discussed in this study, which can be helpful for researchers, academics, and young students aspiring to play a role in the achievement of sustainable agriculture.

    Advancing Land Cover Mapping in Remote Sensing with Deep Learning

    Automatic mapping of land cover in remote sensing data plays an increasingly significant role in several earth observation (EO) applications, such as sustainable development, autonomous agriculture, and urban planning. Due to the complexity of the real ground surface and environment, accurate classification of land cover types faces many challenges. This thesis provides novel deep learning-based solutions to land cover mapping challenges, such as dealing with intricate objects and imbalanced classes in multi-spectral and high-spatial-resolution remote sensing data. The first work presents a novel model for learning richer multi-scale and global contextual representations in very high-resolution remote sensing images, the dense dilated convolutions' merging (DDCM) network. The proposed method is lightweight, flexible, and extendable, so it can be used as a simple yet effective encoder and decoder module to address different classification and semantic mapping challenges. Intensive experiments on different benchmark remote sensing datasets demonstrate that the proposed method achieves better performance while consuming far fewer computational resources than other published methods. Next, a novel graph model is developed for capturing long-range pixel dependencies in remote sensing images to improve land cover mapping. One key component of the method is the self-constructing graph (SCG) module, which can effectively construct global context relations (a latent graph structure) without requiring prior knowledge graphs. The proposed SCG-based models achieved competitive performance on different representative remote sensing datasets with faster training and lower computational cost compared to strong baseline models. The third work introduces a new framework, the multi-view self-constructing graph (MSCG) network, which extends the vanilla SCG model to capture multi-view context representations with rotation invariance for improved segmentation performance. Meanwhile, a novel adaptive class weighting loss function is developed to alleviate the class imbalance commonly found in EO datasets for semantic segmentation. Experiments on benchmark data demonstrate that the proposed framework is computationally efficient and robust, producing improved segmentation results for imbalanced classes. To address the key challenges in multi-modal land cover mapping of remote sensing data, namely 'what', 'how', and 'where' to effectively fuse multi-source features and efficiently learn optimal joint representations of different modalities, the last work presents a compact and scalable multi-modal deep learning framework (MultiModNet) based on two novel modules: the pyramid attention fusion module and the gated fusion unit. The proposed MultiModNet outperforms strong baselines on two representative remote sensing datasets with fewer parameters and at a lower computational cost. Extensive ablation studies also validate the effectiveness and flexibility of the framework.
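Class-weighted losses for imbalanced segmentation, like the adaptive weighting mentioned above, start from per-class pixel frequencies. A minimal sketch of the common inverse-frequency baseline (the thesis's adaptive scheme is more elaborate; this only illustrates the underlying idea, with a toy label map):

```python
import numpy as np

def inverse_frequency_weights(label_map: np.ndarray, n_classes: int) -> np.ndarray:
    """Inverse-frequency class weights for a segmentation loss: rare
    classes get larger weights so the loss does not ignore them.
    Weights are normalized to sum to 1."""
    counts = np.bincount(label_map.ravel(), minlength=n_classes).astype(float)
    weights = counts.sum() / (n_classes * np.maximum(counts, 1))
    return weights / weights.sum()

labels = np.array([[0, 0, 0, 1], [0, 0, 0, 2]])  # class 0 dominates
w = inverse_frequency_weights(labels, 3)
print(w.argmin())  # 0 -> the majority class receives the smallest weight
```

These weights would multiply the per-class terms of a cross-entropy loss; adaptive variants instead update the weights during training as class performance evolves.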