Building a Data Set over 12 Globally Distributed Sites to Support the Development of Agriculture Monitoring Applications with Sentinel-2
Developing better agricultural monitoring capabilities based on Earth Observation data is critical for strengthening food production information and market transparency. The Sentinel-2 mission has the optimal capacity for regional to global agriculture monitoring in terms of resolution (10-20 m), revisit frequency (five days) and coverage (global). In this context, the European Space Agency launched the "Sentinel-2 for Agriculture" project in 2014, which aims to prepare the exploitation of Sentinel-2 data for agriculture monitoring through the development of open-source processing chains for relevant products. The project generated an unprecedented data set, made of "Sentinel-2-like" time series and in situ data acquired in 2013 over 12 globally distributed sites. Earth Observation time series were mostly built on the SPOT4 (Take 5) data set, which was specifically designed to simulate Sentinel-2. They also included Landsat 8 and RapidEye imagery as complementary data sources. Images were pre-processed to Level 2A and the quality of the resulting time series was assessed. In situ data about cropland, crop type and biophysical variables were shared by site managers, most of them belonging to the "Joint Experiment for Crop Assessment and Monitoring" network. This data set allowed testing and comparing across sites the methodologies that will be at the core of the future "Sentinel-2 for Agriculture" system.
Authors and affiliations:
Bontemps, Sophie. Université Catholique de Louvain, Earth and Life Institute; Belgium
Arias, Marcela. Université de Toulouse - Le Mirail, Centre d'Etudes Spatiales de la BIOsphère; France
Cara, Cosmin. CS Romania S.A.; Romania
Dedieu, Gérard. Université de Toulouse - Le Mirail, Centre d'Etudes Spatiales de la BIOsphère; France
Guzzonato, Eric. CS Systèmes d'Information; France
Hagolle, Olivier. Université de Toulouse - Le Mirail, Centre d'Etudes Spatiales de la BIOsphère; France
Inglada, Jordi. Université de Toulouse - Le Mirail, Centre d'Etudes Spatiales de la BIOsphère; France
Matton, Nicolas. Université Catholique de Louvain, Earth and Life Institute; Belgium
Morin, David. Université de Toulouse - Le Mirail, Centre d'Etudes Spatiales de la BIOsphère; France
Popescu, Ramona. CS Romania S.A.; Romania
Rabaute, Thierry. CS Systèmes d'Information; France
Savinaud, Mickael. CS Systèmes d'Information; France
Sepulcre, Guadalupe. Université Catholique de Louvain, Earth and Life Institute; Belgium
Valero, Silvia. Université de Toulouse - Le Mirail, Centre d'Etudes Spatiales de la BIOsphère; France
Ahmad, Ijaz. Pakistan Space and Upper Atmosphere Research Commission, Space Applications Research Complex, National Agriculture Information Center Directorate; Pakistan
Bégué, Agnès. Centre de Coopération Internationale en Recherche Agronomique pour le Développement; France
Wu, Bingfang. Chinese Academy of Sciences, Institute of Remote Sensing and Digital Earth; China
De Abelleyra, Diego. Instituto Nacional de Tecnología Agropecuaria (INTA), Instituto de Clima y Agua; Argentina
Diarra, Alhousseine. Université Cadi Ayyad, Faculté des Sciences Semlalia; Morocco
Dupuy, Stéphane. Centre de Coopération Internationale en Recherche Agronomique pour le Développement; France
French, Andrew. United States Department of Agriculture, Agricultural Research Service, Arid Land Agricultural Research Center; United States
Akhtar, Ibrar ul Hassan. Pakistan Space and Upper Atmosphere Research Commission, Space Applications Research Complex, National Agriculture Information Center Directorate; Pakistan
Kussul, Nataliia. National Academy of Sciences of Ukraine, Space Research Institute and State Space Agency of Ukraine; Ukraine
Lebourgeois, Valentine. Centre de Coopération Internationale en Recherche Agronomique pour le Développement; France
Le Page, Michel. Université Cadi Ayyad, Faculté des Sciences Semlalia, Laboratoire Mixte International TREMA; Morocco. Université de Toulouse - Le Mirail, Centre d'Etudes Spatiales de la BIOsphère; France
Newby, Terrence. Agricultural Research Council; South Africa
Savin, Igor. V.V. Dokuchaev Soil Science Institute; Russia
Verón, Santiago Ramón. Instituto Nacional de Tecnología Agropecuaria (INTA), Instituto de Clima y Agua; Argentina
Koetz, Benjamin. European Space Agency, European Space Research Institute; Italy
Defourny, Pierre. Université Catholique de Louvain, Earth and Life Institute; Belgium
Deep Learning Training and Benchmarks for Earth Observation Images: Data Sets, Features, and Procedures
Deep learning methods are often used for image classification or local object segmentation. The corresponding test and validation data sets are an integral part of the learning process and also of the algorithm performance evaluation. High and particularly very high-resolution Earth observation (EO) applications based on satellite images primarily aim at the semantic labeling of land cover structures or objects, as well as of temporal evolution classes. However, one of the main EO objectives is physical parameter retrieval, such as temperatures, precipitation, and crop yield predictions. Therefore, we need reliably labeled data sets and tools to train the developed algorithms and to assess the performance of our deep learning paradigms. Generally, imaging sensors generate a visually understandable representation of the observed scene. However, this does not hold for many EO images, where the recorded images only depict a spectral subset of the scattered light field, thus generating an indirect signature of the imaged object. This highlights the burden of EO image understanding as a new and particular challenge for Machine Learning (ML) and Artificial Intelligence (AI). This chapter reviews and analyses new approaches to EO imaging, leveraging recent advances in physical process-based ML and AI methods and signal processing.
The Canadian Cropland Dataset: A New Land Cover Dataset for Multitemporal Deep Learning Classification in Agriculture
Monitoring land cover using remote sensing is vital for studying
environmental changes and ensuring global food security through crop yield
forecasting. Specifically, multitemporal remote sensing imagery provides
relevant information about the dynamics of a scene, which has proven to lead to
better land cover classification results. Nevertheless, few studies have
benefited from high spatial and temporal resolution data due to the difficulty
of accessing reliable, fine-grained and high-quality annotated samples to
support their hypotheses. Therefore, we introduce a temporal patch-based
dataset of Canadian croplands, enriched with labels retrieved from the Canadian
Annual Crop Inventory. The dataset contains 78,536 manually verified and
curated high-resolution (10 m/pixel, 640 x 640 m) geo-referenced images from 10
crop classes collected over four crop production years (2017-2020) and five
months (June-October). Each instance contains 12 spectral bands, an RGB image,
and additional vegetation index bands. Individually, each category contains at
least 4,800 images. Moreover, as a benchmark, we provide models and source code
that allow a user to predict the crop class using a single image (ResNet,
DenseNet, EfficientNet) or a sequence of images (LRCN, 3D-CNN) from the same
location. In perspective, we expect this evolving dataset to propel the
creation of robust agro-environmental models that can accelerate the
comprehension of complex agricultural regions by providing accurate and
continuous monitoring of land cover.
Comment: 24 pages, 5 figures, dataset descriptor
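Each instance described above pairs raw spectral bands with derived vegetation-index bands. As a minimal sketch of how such an index band is computed (the band names, 64 x 64 patch size, and reflectance ranges below are illustrative assumptions, not values taken from the dataset), NDVI can be derived from red and near-infrared reflectance:

```python
import numpy as np

# Hypothetical 64x64 reflectance patch; values in [0, 1] stand in for a
# Sentinel-2-style red band (B4) and near-infrared band (B8).
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.25, size=(64, 64))
nir = rng.uniform(0.30, 0.60, size=(64, 64))

# NDVI = (NIR - Red) / (NIR + Red); a small epsilon guards against division by zero.
ndvi = (nir - red) / (nir + red + 1e-9)
```

By construction NDVI is bounded in [-1, 1], so it can be stacked with the raw bands as one more channel per instance.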
SICKLE: A Multi-Sensor Satellite Imagery Dataset Annotated with Multiple Key Cropping Parameters
The availability of well-curated datasets has driven the success of Machine
Learning (ML) models. Despite greater access to earth observation data in
agriculture, there is a scarcity of curated and labelled datasets, which limits
the potential of its use in training ML models for remote sensing (RS) in
agriculture. To this end, we introduce a first-of-its-kind dataset called
SICKLE, which constitutes a time-series of multi-resolution imagery from 3
distinct satellites: Landsat-8, Sentinel-1 and Sentinel-2. The dataset
covers multi-spectral, thermal and microwave observations over the January
2018 - March 2021 period. We construct each temporal sequence by considering the
cropping practices followed by farmers primarily engaged in paddy cultivation
in the Cauvery Delta region of Tamil Nadu, India; and annotate the
corresponding imagery with key cropping parameters at multiple resolutions
(i.e. 3m, 10m and 30m). Our dataset comprises 2,370 season-wise samples from
388 unique plots, having an average size of 0.38 acres, for classifying 21 crop
types across 4 districts in the Delta, which amounts to approximately 209,000
satellite images. Out of the 2,370 samples, 351 paddy samples from 145 plots
are annotated with multiple crop parameters, such as the variety of paddy, its
growing season, and productivity in terms of per-acre yields. Ours is also
among the first studies to consider the growing-season activities pertinent
to crop phenology (spanning sowing, transplanting and harvesting dates) as
parameters of interest. We benchmark SICKLE on three tasks: crop type, crop
phenology (sowing, transplanting, harvesting), and yield prediction.
Comment: Accepted as an oral presentation at WACV 202
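Annotating at 3 m, 10 m and 30 m implies bringing bands from different sensors onto a common grid. A minimal sketch of one such alignment step, nearest-neighbour upsampling of a coarse 30 m band onto a 10 m grid (the toy array and the choice of nearest-neighbour resampling are assumptions for illustration, not the paper's actual pipeline):

```python
import numpy as np

# Toy 4x4 patch standing in for a Landsat-8 band at 30 m resolution.
landsat_band = np.arange(16, dtype=float).reshape(4, 4)

# 30 m / 10 m = 3: replicate each coarse pixel into a 3x3 block of fine pixels.
ratio = 3
upsampled = np.repeat(np.repeat(landsat_band, ratio, axis=0), ratio, axis=1)
# The result is a 12x12 array on the 10 m grid, co-registered with the coarse patch.
```

Each 3 x 3 block of the output carries the value of its parent 30 m pixel, which is the simplest way to compare labels defined at different resolutions.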
Lightweight, Pre-trained Transformers for Remote Sensing Timeseries
Machine learning algorithms for parsing remote sensing data have a wide range
of societally relevant applications, but labels used to train these algorithms
can be difficult or impossible to acquire. This challenge has spurred research
into self-supervised learning for remote sensing data aiming to unlock the use
of machine learning in geographies or application domains where labelled
datasets are small. Current self-supervised learning approaches for remote
sensing data draw significant inspiration from techniques applied to natural
images. However, remote sensing data has important differences from natural
images -- for example, the temporal dimension is critical for many tasks and
data is collected from many complementary sensors. We show that designing
models and self-supervised training techniques specifically for remote sensing
data results in both smaller and more performant models. We introduce the
Pretrained Remote Sensing Transformer (Presto), a transformer-based model
pre-trained on remote sensing pixel-timeseries data. Presto excels at a wide
variety of globally distributed remote sensing tasks and outperforms much
larger models. Presto can be used for transfer learning or as a feature
extractor for simple models, enabling efficient deployment at scale.
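The "frozen feature extractor feeding a simple model" pattern described above can be sketched generically. Everything in this sketch is a stand-in: a fixed random projection replaces the pretrained transformer, a nearest-centroid classifier plays the "simple model", and the pixel-timeseries data is synthetic. It illustrates only the pattern, not Presto itself:

```python
import numpy as np

rng = np.random.default_rng(42)

def encode(pixel_timeseries, w):
    """Stand-in for a frozen encoder: flatten each (T, bands) series, then project."""
    return pixel_timeseries.reshape(len(pixel_timeseries), -1) @ w

# Synthetic dataset: 40 pixel-timeseries, 12 time steps, 4 bands, two classes.
n, timesteps, bands, dim = 40, 12, 4, 16
x = rng.normal(size=(n, timesteps, bands))
y = np.array([0] * 20 + [1] * 20)
x[y == 1] += 1.5                      # shift class 1 so the classes are separable

w = rng.normal(size=(timesteps * bands, dim))   # frozen "encoder" weights
z = encode(x, w)                                 # embeddings, shape (40, 16)

# "Simple model": nearest-centroid classification in embedding space.
centroids = np.stack([z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(z[:, None, :] - centroids[None], axis=2), axis=1)
accuracy = (pred == y).mean()
```

The appeal of the pattern is that only the light-weight head needs task-specific fitting; the encoder runs once per pixel-timeseries.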
Graph Neural Networks Extract High-Resolution Cultivated Land Maps from Sentinel-2 Image Series
Maintaining farm sustainability through optimized agricultural
management practices helps build a more planet-friendly environment. Emerging
satellite missions can acquire multi- and hyperspectral imagery that captures
more detailed spectral information about the scanned area, hence allowing us
to benefit from subtle spectral features during the analysis process in
agricultural applications. We introduce an approach for extracting 2.5 m
cultivated land maps from 10 m Sentinel-2 multispectral image series which
benefits from a compact graph convolutional neural network. The experiments
indicate that our models not only outperform classical and deep machine
learning techniques through delivering higher-quality segmentation maps, but
also dramatically reduce the memory footprint when compared to U-Nets (almost
8k trainable parameters of our models, with up to 31M parameters of U-Nets).
Such memory frugality is pivotal in the missions which allow us to uplink a
model to the AI-powered satellite once it is in orbit, as sending large nets is
impossible due to the time constraints.
Comment: 7 pages (including supplementary material), published in IEEE
Geoscience and Remote Sensing Letters
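The compact graph-convolutional idea can be illustrated with a single propagation step, H' = ReLU(A_hat H W), where A_hat is the adjacency matrix with self-loops, row-normalised so each node averages over itself and its neighbours. The 3-node graph and the weights below are toy assumptions, not the paper's network:

```python
import numpy as np

# Toy path graph of 3 nodes: 0 - 1 - 2.
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
a_hat = adjacency + np.eye(3)                 # add self-loops
a_hat /= a_hat.sum(axis=1, keepdims=True)     # row-normalise (mean aggregation)

h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 nodes, 2 features each
w = np.array([[1.0, -1.0], [0.5, 0.5]])              # learned 2 -> 2 linear map

h_next = np.maximum(a_hat @ h @ w, 0.0)       # one propagation step + ReLU
```

Note that the only trainable tensor here is `w` (4 parameters); stacking a handful of such layers is what keeps parameter counts in the thousands rather than the millions of a U-Net.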
Mapping smallholder cashew plantations to inform sustainable tree crop expansion in Benin
Cashews are grown by over 3 million smallholders in more than 40 countries
worldwide as a principal source of income. As the third largest cashew producer
in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15%
of the country's national export earnings. However, a lack of information on
where and how cashew trees grow across the country hinders decision-making that
could support increased cashew production and poverty alleviation. By
leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep
learning algorithms, and large-scale ground truth datasets, we successfully
produced the first national map of cashew in Benin and characterized the
expansion of cashew plantations between 2015 and 2021. In particular, we
developed a SpatioTemporal Classification with Attention (STCA) model to map
the distribution of cashew plantations, which can fully capture texture
information from discriminative time steps during a growing season. We further
developed a Clustering Augmented Self-supervised Temporal Classification
(CASTC) model to distinguish high-density versus low-density cashew plantations
by automatic feature extraction and optimized clustering. Results show that the
STCA model achieved an overall accuracy of over 85% and the CASTC model an
overall accuracy of 76%. We found that the cashew area in Benin almost doubled
from 2015 to 2021 with 60% of new plantation development coming from cropland
or fallow land, while encroachment of cashew plantations into protected areas
has increased by 55%. Only half of cashew plantations were high-density in
2021, suggesting high potential for intensification. Our study illustrates the
power of combining high-resolution remote sensing imagery and state-of-the-art
deep learning algorithms to better understand tree crops in the heterogeneous
smallholder landscape.
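The attention-over-time-steps mechanism attributed to STCA can be sketched in its generic form: score each time step's features, normalise the scores with a softmax, and pool the season into one weighted summary. The features and the scoring vector below are synthetic stand-ins for learned quantities:

```python
import numpy as np

rng = np.random.default_rng(7)
features = rng.normal(size=(8, 32))   # 8 time steps in a season, 32-dim features
score_vec = rng.normal(size=32)       # stand-in for learned attention parameters

scores = features @ score_vec                 # one scalar score per time step
weights = np.exp(scores - scores.max())       # numerically stable softmax
weights /= weights.sum()

pooled = weights @ features                   # attention-weighted season summary
```

Time steps with high scores dominate `pooled`, which is how such models emphasise the discriminative parts of a growing season.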
A systematic review of the use of Deep Learning in Satellite Imagery for Agriculture
Agricultural research is essential for increasing food production to meet the
requirements of a growing population in the coming decades. Recently,
satellite technology has improved rapidly, and deep learning has seen much
success in generic computer vision tasks and many application areas, which
presents an important opportunity to improve the analysis of agricultural land.
Here we present a systematic review of 150 studies to find the current uses of
deep learning on satellite imagery for agricultural research. Although we
identify 5 categories of agricultural monitoring tasks, the majority of the
research interest is in crop segmentation and yield prediction. We found that,
when used, modern deep learning methods consistently outperformed traditional
machine learning across most tasks; the only exception was that Long Short-Term
Memory (LSTM) Recurrent Neural Networks did not consistently outperform Random
Forests (RF) for yield prediction. The reviewed studies have largely adopted
methodologies from generic computer vision, except for one major omission:
benchmark datasets are not utilised to evaluate models across studies, making
it difficult to compare results. Additionally, some studies have specifically
utilised the extra spectral resolution available in satellite imagery, but
other divergent properties of satellite images - such as the hugely different
scales of spatial patterns - are not being taken advantage of in the reviewed
studies.
Comment: 25 pages, 2 figures and lots of large tables. Supplementary materials
section included here in main pdf
Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation
In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparison of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationship between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) was analyzed. Our results showed that (1) the FSDAF model was the most robust regardless of variations in LHI and TVI at both the scene level and the local level, while it was less computationally efficient than the other models except for one-pair learning; (2) Fit-FC had the highest computing efficiency. It was accurate in predicting reflectance but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change, with the capability of preserving image structures. However, it was the least computationally efficient model; (4) STARFM was good at predicting phenological change, while it was not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal changes or abrupt changes. These findings could provide guidelines for users to select an appropriate STIF method for their own applications.
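The core intuition behind weight function-based methods such as STARFM can be written compactly: predict the fine-resolution image at date t2 from the fine image at base date t1 plus the change observed in co-located coarse pixels, F2 = F1 + (C2 - C1). Real STARFM adds spatially weighted neighbourhood and spectral-similarity terms on top of this identity; the toy reflectance arrays below only illustrate the base relation:

```python
import numpy as np

fine_t1 = np.array([[0.20, 0.22],
                    [0.18, 0.25]])             # fine-resolution reflectance at base date t1
coarse_t1 = np.full((2, 2), 0.21)              # coarse pixels resampled to the fine grid, t1
coarse_t2 = np.full((2, 2), 0.27)              # coarse observation at prediction date t2

# Assume sensor biases cancel between dates: add the coarse temporal change
# to the fine base image to predict the fine image at t2.
fine_t2 = fine_t1 + (coarse_t2 - coarse_t1)
```

Because the coarse change here is a uniform +0.06, the prediction preserves the fine-scale spatial pattern of t1 while shifting its overall reflectance level, which is exactly why such methods handle gradual (e.g. phenological) change well and abrupt land cover change poorly.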