518 research outputs found

    Crop type recognition based on Hidden Markov Models of plant phenology

    Get PDF
    Abstract: This work introduces a Hidden Markov Model (HMM)-based …

    Using Hidden Markov Models for Land Surface Phenology: An Evaluation Across a Range of Land Cover Types in Southeast Spain

    Get PDF
    Land Surface Phenology (LSP) metrics are increasingly being used as indicators of climate change impacts in ecosystems. For this purpose, it is necessary to use methods that can be applied to large areas with different types of vegetation, including vulnerable semiarid ecosystems that exhibit high spatial variability and a low signal-to-noise ratio in seasonality. In this work, we evaluated the use of hidden Markov models (HMM) to extract phenological parameters from Moderate Resolution Imaging Spectroradiometer (MODIS)-derived Normalized Difference Vegetation Index (NDVI). We analyzed NDVI time-series data for the period 2000–2018 across a range of land cover types in Southeast Spain, including rice croplands, shrublands, mixed pine forests, and semiarid steppes. Start of Season (SOS) and End of Season (EOS) metrics derived from HMM were compared with those obtained using well-established smoothing methods. When a clear and consistent seasonal variation was present, as was the case in the rice croplands, and when adjusting average curves, the smoothing methods performed as well as expected, with HMM providing consistent results. When spatial variability was high and seasonality was less clearly defined, as in the semiarid shrublands and steppe, the performance of the smoothing methods degraded. In these cases, the results from HMM were also less consistent, yet they were able to provide pixel-wise estimations of the metrics even when the comparison methods did not. This research was funded by Ministerio de Economía y Competitividad grant numbers CGL2017-89804-R, CGL2014-59074-R, and CGL2015-69773-C2-1-P.
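As a sketch of how an HMM can turn an NDVI trajectory into SOS/EOS metrics, the toy example below bins NDVI into three levels, decodes a hypothetical four-state phenology model (dormant, green-up, mature, senescent) with the Viterbi algorithm, and reads SOS as the first green-up step and EOS as the last senescent step. The state set, transition/emission probabilities, and NDVI thresholds are illustrative assumptions, not the fitted models from this study.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence (log-space Viterbi)."""
    T, N = len(obs), len(pi)
    with np.errstate(divide="ignore"):           # zeros in A become -inf, which is fine
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA           # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical 4-state phenology model: 0=dormant, 1=green-up, 2=mature, 3=senescent.
A = np.array([[0.80, 0.20, 0.00, 0.00],
              [0.00, 0.70, 0.30, 0.00],
              [0.00, 0.00, 0.75, 0.25],
              [0.30, 0.00, 0.00, 0.70]])
B = np.array([[0.85, 0.10, 0.05],                # emission probs over NDVI bins low/mid/high
              [0.20, 0.60, 0.20],
              [0.05, 0.15, 0.80],
              [0.20, 0.60, 0.20]])
pi = np.array([0.70, 0.10, 0.10, 0.10])

ndvi = np.array([0.15, 0.18, 0.35, 0.55, 0.70, 0.72, 0.50, 0.30, 0.20])
obs = np.digitize(ndvi, [0.3, 0.6])              # bin NDVI into low/mid/high
states = viterbi(obs, pi, A, B)
sos = states.index(1)                            # first green-up step -> Start of Season
eos = max(i for i, s in enumerate(states) if s == 3)  # last senescent step -> End of Season
print(states, sos, eos)
```

Because the decoding is per time series, the same routine can run pixel-wise even where neighbouring pixels disagree, which is the property the abstract highlights for the semiarid sites.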

    AICropCAM: Deploying classification, segmentation, detection, and counting deep-learning models for crop monitoring on the edge

    Get PDF
    Precision Agriculture (PA) promises to meet the future demands for food, feed, fiber, and fuel while keeping their production sustainable and environmentally friendly. PA relies heavily on sensing technologies to inform site-specific decision support for planting, irrigation, fertilization, spraying, and harvesting. Traditional point-based sensors enjoy small data sizes but are limited in their capacity to measure plant and canopy parameters. On the other hand, imaging sensors can be powerful in measuring a wide range of these parameters, especially when coupled with Artificial Intelligence. The challenge, however, is the lack of computing, electric power, and connectivity infrastructure in agricultural fields, preventing the full utilization of imaging sensors. This paper reports AICropCAM, a field-deployable imaging framework that integrates edge image processing, the Internet of Things (IoT), and LoRaWAN for low-power, long-range communication. The core component of AICropCAM is a stack of four Deep Convolutional Neural Network (DCNN) models running sequentially: CropClassiNet for crop type classification, CanopySegNet for canopy cover quantification, PlantCountNet for plant and weed counting, and InsectNet for insect identification. These DCNN models were trained and tested with >43,000 field crop images collected offline. AICropCAM was embodied in a distributed wireless sensor network, with each sensor node consisting of an RGB camera for image acquisition, a Raspberry Pi 4B single-board computer for edge image processing, and an Arduino MKR1310 for LoRa communication and power management. Our testing showed that the time to run the DCNN models ranged from 0.20 s for InsectNet to 20.20 s for CanopySegNet, and power consumption ranged from 3.68 W for InsectNet to 5.83 W for CanopySegNet. The classification model CropClassiNet reported 94.5 % accuracy, and the segmentation model CanopySegNet reported 92.83 % accuracy. The two object detection models, PlantCountNet and InsectNet, reported mean average precision of 0.69 and 0.02, respectively, on the test images. Predictions from the DCNN models were transmitted to the ThingSpeak IoT platform for visualization and analytics. We concluded that AICropCAM successfully implemented image processing on the edge, drastically reduced the amount of data being transmitted, and could satisfy the real-time needs of decision-making in PA. AICropCAM can be deployed on moving platforms such as center pivots or drones to increase its spatial coverage and resolution to support crop monitoring and field operations.
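The run-the-stack-then-transmit pattern is easy to sketch: each model appends a few fields to a compact payload, and only that payload, not the image, is handed to the radio. The four functions and their field names below are hypothetical stand-ins for the trained DCNNs; the point is the sequential dispatch and the size reduction that makes LoRa feasible.

```python
import json
import time

# Hypothetical stand-ins for the four DCNN models; a real node would run
# trained models on the Raspberry Pi instead of returning constants.
def crop_classi_net(img):  return {"crop": "maize", "conf": 0.94}
def canopy_seg_net(img):   return {"canopy_pct": 61.2}
def plant_count_net(img):  return {"plants": 24, "weeds": 3}
def insect_net(img):       return {"insects": 0}

PIPELINE = [crop_classi_net, canopy_seg_net, plant_count_net, insect_net]

def process_frame(img):
    """Run the model stack sequentially and build a compact uplink payload."""
    payload = {}
    for model in PIPELINE:
        t0 = time.monotonic()
        payload.update(model(img))               # each model contributes its fields
        payload.setdefault("ms", []).append(round((time.monotonic() - t0) * 1000, 2))
    return json.dumps(payload).encode()          # bytes to hand to the LoRa radio

frame = bytes(640 * 480 * 3)                     # dummy 640x480 RGB frame
msg = process_frame(frame)
print(len(frame), "->", len(msg), "bytes")       # edge inference shrinks the uplink
```

The same dictionary could be posted to an IoT platform such as ThingSpeak for visualization, as the abstract describes.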

    Improvement in Land Cover and Crop Classification based on Temporal Features Learning from Sentinel-2 Data Using Recurrent-Convolutional Neural Network (R-CNN)

    Get PDF
    Understanding current land cover use, along with monitoring change over time, is vital for agronomists and agricultural agencies responsible for land management. The increasing spatial and temporal resolution of globally available satellite images, such as those provided by Sentinel-2, creates new possibilities for researchers to use freely available multi-spectral optical images, with decametric spatial resolution and frequent revisits, for remote sensing applications such as land cover and crop classification (LC&CC), agricultural monitoring and management, and environmental monitoring. Existing solutions dedicated to cropland mapping can be categorized as pixel-based or object-based. However, the task remains challenging when many classes of agricultural crops are considered at a massive scale. In this paper, a novel deep learning model for pixel-based LC&CC is developed and implemented, based on Recurrent Neural Networks (RNN) in combination with Convolutional Neural Networks (CNN), using multi-temporal Sentinel-2 imagery of the central-north part of Italy, which has a diverse agricultural system dominated by economic crop types. The proposed methodology is capable of automated feature extraction by learning the time correlation of multiple images, which reduces manual feature engineering and the modeling of crop phenological stages. Fifteen classes, including major agricultural crops, were considered in this study. We also tested other widely used traditional machine learning algorithms for comparison, such as support vector machine (SVM), random forest (RF), kernel SVM, and gradient boosting machine (XGBoost). The overall accuracy achieved by our proposed Pixel R-CNN was 96.5%, a considerable improvement over existing mainstream methods. This study showed that the Pixel R-CNN model offers a highly accurate way to assess and employ time-series data for multi-temporal classification tasks.
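The recurrent half of such a pixel-based classifier can be sketched in a few lines: a GRU cell consumes one pixel's band values acquisition by acquisition, and a linear softmax head maps the final hidden state to the fifteen classes. The weights here are random and untrained, and the convolutional feature extractor is omitted, so this illustrates only the per-pixel temporal data flow, not the paper's trained Pixel R-CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(h, x, W, U, b):
    """One GRU update; W, U, b hold [update | reset | candidate] blocks."""
    H = h.size
    z = 1 / (1 + np.exp(-(x @ W[:, :H] + h @ U[:, :H] + b[:H])))           # update gate
    r = 1 / (1 + np.exp(-(x @ W[:, H:2*H] + h @ U[:, H:2*H] + b[H:2*H])))  # reset gate
    n = np.tanh(x @ W[:, 2*H:] + (r * h) @ U[:, 2*H:] + b[2*H:])           # candidate
    return (1 - z) * n + z * h

T, D, H, C = 12, 4, 8, 15            # acquisitions, bands, hidden units, crop classes
W = rng.normal(0, 0.3, (D, 3 * H))
U = rng.normal(0, 0.3, (H, 3 * H))
b = np.zeros(3 * H)
Wout = rng.normal(0, 0.3, (H, C))

pixel_series = rng.random((T, D))    # one pixel's multi-temporal band values
h = np.zeros(H)
for x in pixel_series:               # the RNN assimilates each acquisition in order
    h = gru_step(h, x, W, U, b)
logits = h @ Wout
probs = np.exp(logits - logits.max())
probs /= probs.sum()                 # softmax over the 15 classes
print(int(probs.argmax()), float(probs.max()))
```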

    Analysis of Irrigation Decision Behavior and Forecasting Future Irrigation Decisions

    Get PDF
    Farmers play a pivotal role in food production. To be economically successful, farmers must make many decisions during the course of a growing season about the allocation of inputs to production. For farmers in arid regions, one of these decisions is whether to irrigate. This research is the first of its kind to investigate the reasons that drive a farmer to make irrigation decisions and to use those factors to forecast future irrigation decisions. This study can help water managers and canal operators estimate short-term irrigation demands, thereby gaining information that might be useful in the management of irrigation supply systems. This work presents three approaches to studying farmer irrigation behavior: Bayesian belief networks (BBNs), decision trees, and hidden Markov models (HMMs). All three are machine learning models often used to analyze problems in dynamic and uncertain environments. These algorithms learn the connections between observed input and output data and can make predictions about future events. The models were used to study the behavior of farmers in the Canal B command area, located in the Lower Sevier River Basin, Delta, Utah. Alfalfa, barley, and corn are the major crops in this area. Biophysical variables measured during the growing seasons were used as inputs to build the models. Information about crop phenology, soil moisture, and weather variables was compiled. Information about the timing of irrigation events was available from soil moisture probes installed on some agricultural fields at the site. The models were capable of identifying the variables that are important in forecasting an irrigation decision, classes of farmers, and decisions with single- and multi-factor effects on farmer behavior. The models did this across years and crops.
    The advantage of using these models to study a complex problem like behavior is that they do not require exact information, which can never be completely obtained given the complexity of the problem. This study uses biophysical inputs to forecast decisions about water use. Such forecasts cannot be made satisfactorily using survey methodologies. The study reveals irrigation behavior characteristics. These conform to previous beliefs that a farmer might look at crop conditions, consult a neighbor, or irrigate on a weekend if he has a job during the week. When presented with new data, these models gave good estimates of probable days of irrigation, given past behavior. All three models can be adequately used to explore farmers' irrigation behavior for a given site. They are capable of answering questions related to the driving forces of irrigation decisions and the classes of subjects involved in a complex process.
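A toy version of the HMM side of this forecasting problem: the forward algorithm filters a two-state (no-irrigation / irrigation-day) model on discretized soil-moisture observations, and one extra transition step gives a next-day irrigation probability. All probabilities below are invented for illustration, not fitted to the Canal B data.

```python
import numpy as np

# Toy 2-state HMM: hidden state 0 = "no irrigation", 1 = "irrigation day".
A = np.array([[0.8, 0.2],
              [0.6, 0.4]])                  # day-to-day state transitions
B = np.array([[0.2, 0.5, 0.3],              # observations: 0=dry, 1=moist, 2=wet soil
              [0.7, 0.25, 0.05]])           # irrigation days tend to follow dry soil
pi = np.array([0.9, 0.1])

def forward(obs):
    """Filtered state distribution after the last observation (forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()                # normalize to avoid underflow
    return alpha

obs = [2, 1, 1, 0, 0]                       # soil drying out over five days
p_next = forward(obs) @ A                   # one-step-ahead state prediction
print(f"P(irrigation tomorrow) = {p_next[1]:.3f}")
```

Feeding the filter a drying soil-moisture sequence raises the predicted irrigation probability relative to a consistently wet one, which is the kind of short-term demand signal the abstract says canal operators could use.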

    Crop Classification Under Varying Cloud Cover With Neural Ordinary Differential Equations

    Full text link
    Optical satellite sensors cannot see the earth’s surface through clouds. Despite the periodic revisit cycle, image sequences acquired by earth observation satellites are, therefore, irregularly sampled in time. State-of-the-art methods for crop classification (and other time-series analysis tasks) rely on techniques that implicitly assume regular temporal spacing between observations, such as recurrent neural networks (RNNs). We propose to use neural ordinary differential equations (NODEs) in combination with RNNs to classify crop types in irregularly spaced image sequences. The resulting ODE-RNN models consist of two steps: an update step, where a recurrent unit assimilates new input data into the model’s hidden state, and a prediction step, in which a NODE propagates the hidden state until the next observation arrives. The prediction step is based on a continuous representation of the latent dynamics, which has several advantages. At the conceptual level, it is a more natural way to describe the mechanisms that govern the phenological cycle. From a practical point of view, it makes it possible to sample the system state at arbitrary points in time, so that one can integrate observations whenever they are available and extrapolate beyond the last observation. Our experiments show that ODE-RNN indeed improves classification accuracy over common baselines such as LSTM, GRU, temporal convolutional network, and transformer. The gains are most prominent in the challenging scenario where only a few observations are available (i.e., frequent cloud cover). Moreover, we show that the ability to extrapolate translates to better classification performance early in the season, which is important for forecasting.
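The two-step ODE-RNN loop described above can be sketched with plain Euler integration: the hidden state is propagated through each cloud gap (prediction step), folded together with every available observation (update step), and finally extrapolated beyond the last acquisition. The weights are random and untrained; this shows the control flow, not the paper's model or solver.

```python
import numpy as np

rng = np.random.default_rng(1)

H, D = 6, 3                                  # hidden size, observation size
Wf = rng.normal(0, 0.4, (H, H))              # parameters of the latent ODE f
Wu = rng.normal(0, 0.4, (D + H, H))          # parameters of the update step

def ode_f(h):
    return np.tanh(h @ Wf)                   # dh/dt = f(h), a learned vector field

def propagate(h, dt, n_steps=10):
    """Prediction step: Euler-integrate the hidden state across a gap of length dt."""
    step = dt / n_steps
    for _ in range(n_steps):
        h = h + step * ode_f(h)
    return h

def update(h, x):
    """Update step: assimilate a new observation into the hidden state."""
    return np.tanh(np.concatenate([x, h]) @ Wu)

# Irregularly timed acquisitions (cloud gaps), in days since season start.
times = np.array([0.0, 5.0, 8.0, 21.0, 24.0])
obs = rng.random((len(times), D))

h, t = np.zeros(H), 0.0
for ti, xi in zip(times, obs):
    h = propagate(h, ti - t)                 # evolve latent state through the gap
    h = update(h, xi)                        # fold in the cloud-free observation
    t = ti
h = propagate(h, 30.0 - t)                   # extrapolate beyond the last observation
print(h.round(3))
```

A classifier head on `h` would then produce the crop-type prediction; because `propagate` accepts any `dt`, the same loop supports both sparse sequences and early-season extrapolation.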

    Context-self contrastive pretraining for crop type semantic segmentation

    Full text link
    In this paper, we propose a fully supervised pre-training scheme based on contrastive learning, particularly tailored to dense classification tasks. The proposed Context-Self Contrastive Loss (CSCL) learns an embedding space that makes semantic boundaries pop out by using a similarity metric between every location in a training sample and its local context. For crop type semantic segmentation from Satellite Image Time Series (SITS), we find performance at parcel boundaries to be a critical bottleneck and explain how CSCL tackles the underlying cause of that problem, improving the state-of-the-art performance in this task. Additionally, using images from the Sentinel-2 (S2) satellite missions, we compile the largest, to our knowledge, SITS dataset densely annotated by crop type and parcel identities, which we make publicly available together with the data generation pipeline. Using that data, we find CSCL, even with minimal pre-training, to improve all respective baselines, and we present a process for semantic segmentation at super-resolution for obtaining crop classes at a more granular level. The code and instructions to download the data can be found at https://github.com/michaeltrs/DeepSatModels. Comment: 15 pages, 17 figures.
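The core idea, contrasting each location's embedding with its spatial context so that parcel boundaries become separable, can be reduced to a few lines. The loss below is a simplified stand-in, not the paper's exact formulation: neighbouring locations with the same parcel label are pulled together and cross-boundary neighbours pushed apart, so well-separated parcel embeddings score a much lower loss than random ones.

```python
import numpy as np

rng = np.random.default_rng(2)

def cscl_loss(emb, labels, tau=0.1):
    """Reduced context-self contrastive loss on an (H, W, C) embedding map.

    Adjacent locations sharing a parcel label are attracted, cross-boundary
    neighbours repelled; a simplification of CSCL, not its exact form.
    """
    Hh, Ww, _ = emb.shape
    e = emb / np.linalg.norm(emb, axis=-1, keepdims=True)    # cosine space
    total, count = 0.0, 0
    for dy, dx in [(0, 1), (1, 0)]:                          # right and down neighbours
        a, b = e[:Hh - dy, :Ww - dx], e[dy:, dx:]
        sim = (a * b).sum(-1) / tau
        same = labels[:Hh - dy, :Ww - dx] == labels[dy:, dx:]
        # -log sigmoid(sim) for same-label pairs, -log sigmoid(-sim) otherwise
        total += np.logaddexp(0.0, -np.where(same, sim, -sim)).sum()
        count += same.size
    return total / count

labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1                                # two parcels side by side
sep = np.where(labels[..., None] == 0, 1.0, -1.0) + rng.normal(0, 0.05, (8, 8, 16))
rnd = rng.normal(0, 1.0, (8, 8, 16))             # embeddings with no boundary structure
print(cscl_loss(sep, labels), cscl_loss(rnd, labels))
```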

    Advancements in Multi-temporal Remote Sensing Data Analysis Techniques for Precision Agriculture

    Get PDF
    The abstract is in the attachment.