
    Detection and classification of changes in agriculture, forest, and shrublands for land cover map updating in Portugal

    Costa, H., Benevides, P., Moreira, F. D., & Caetano, M. (2022). Detection and classification of changes in agriculture, forest, and shrublands for land cover map updating in Portugal. In C. M. U. Neale, & A. Maltese (Eds.), Proceedings of SPIE: Remote Sensing for Agriculture, Ecosystems, and Hydrology XXIV (Vol. 12262, pp. 19). SPIE, Society of Photo-Optical Instrumentation Engineers. https://doi.org/10.1117/12.2636127

    Portugal produced a land cover map for 2018 based on Sentinel-2 data; the map represents 13 classes, including agriculture, six forest tree species, and shrubland. The map was updated for 2020. The strategy focused on three strata where annual changes occur: S1 (agriculture), due to crop rotation; S2 (forest and shrubland), due to wildfires and clear-cuts; and S3 (fire scars and clear-cuts of previous years), where vegetation regeneration occurs. The methodology included i) change detection, ii) classification, and iii) knowledge-based rules. Stratum S1 was classified with images of the entire 2020 crop year and a training dataset extracted from the national Land Parcel Identification System (LPIS) of 2020. The land cover nomenclature was expanded and the class agriculture was split into three distinct classes, resulting in a map with 15 classes in total. Change detection, implemented in stratum S2, analyzed the NDVI profile since 2018 to find potential loss of vegetation. S2 and S3 were classified in two stages: first with images of the entire 2020 crop year, and then with data of October 2020 (end of crop year) to capture late changes. The training points of the 2018 land cover map were used, but only if not associated with NDVI change. For all three strata, knowledge-based rules corrected misclassifications and ensured consistency between the maps. A comparison between 2018 and 2020 reveals important land cover dynamics related to vegetation loss and regeneration on ~5% of the country.
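The NDVI-profile change detection described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the 0.3 drop threshold and the assumption that the first half of the series predates the change are invented here for demonstration.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def detect_vegetation_loss(ndvi_series, drop_threshold=0.3):
    """Flag potential vegetation loss when NDVI falls below its pre-change
    maximum by more than drop_threshold (assumes the first half of the
    series predates the candidate change)."""
    half = len(ndvi_series) // 2
    baseline = np.max(ndvi_series[:half])
    recent = np.min(ndvi_series[half:])
    return (baseline - recent) > drop_threshold

# A burned or clear-cut pixel: high NDVI in 2018, sharp drop afterwards.
nir = np.array([0.45, 0.47, 0.48, 0.25, 0.24, 0.25])
red = np.array([0.06, 0.06, 0.05, 0.15, 0.16, 0.15])
series = ndvi(nir, red)
print(bool(detect_vegetation_loss(series)))  # prints True
```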

    Mapping annual crops in Portugal with Sentinel-2 data

    Benevides, P., Costa, H., Moreira, F. D., & Caetano, M. (2022). Mapping annual crops in Portugal with Sentinel-2 data. In C. M. U. Neale, & A. Maltese (Eds.), Proceedings of SPIE: Remote Sensing for Agriculture, Ecosystems, and Hydrology XXIV (Vol. 12262). SPIE, Society of Photo-Optical Instrumentation Engineers. https://doi.org/10.1117/12.2636125

    This paper presents an annual crop classification exercise covering the entire area of continental Portugal for the 2020 agricultural year. The territory was divided into landscape units, i.e. areas of similar landscape characteristics, for independent training and classification. Data from the Portuguese Land Parcel Identification System (LPIS) was used for training. Thirty-one annual crops were identified for classification. Supervised classification was undertaken using Random Forest. A time series of Sentinel-2 images was gathered and prepared. Automatic processes were applied to auxiliary datasets to improve training data quality and reduce class mislabeling. Automatic random extraction was employed to derive a large number of sampling units for each annual crop class in each landscape unit. An LPIS dataset of controlled parcels was used for results validation. An overall accuracy of 85% was obtained for the map at the national level, indicating that the methodology is useful to identify and characterize most annual crop types in Portugal. Aggregating the annual crop types into two types of growing season, autumn/winter and spring/summer, resulted in large improvements in the accuracy of almost all annual crops, and an overall accuracy improvement of 2%. This experiment shows that the LPIS dataset can be used to train a supervised machine learning classifier with high-resolution optical remote sensing data to produce a reliable crop map at the national level.
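The supervised workflow above (LPIS labels, per-unit features, Random Forest) can be sketched with scikit-learn. This is a schematic stand-in, not the paper's pipeline: the synthetic features, class count, and separability offset are invented here to keep the example self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for per-parcel Sentinel-2 time-series features (e.g. band
# reflectances or NDVI at several dates) with LPIS-derived crop labels.
n_samples, n_features, n_classes = 600, 12, 3
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X += y[:, None] * 0.8  # shift class means so the toy classes are separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"overall accuracy: {acc:.2f}")
```

In the paper this step is repeated per landscape unit, with the LPIS parcels providing the labels instead of synthetic data.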

    Crop Phenology Modelling Using Proximal and Satellite Sensor Data

    Understanding crop phenology is crucial for predicting crop yields and identifying potential risks to food security. The objective was to investigate the effectiveness of satellite sensor data, compared to field observations and proximal sensing, in detecting crop phenological stages. Time series data from 122 winter wheat, 99 silage maize, and 77 late potato fields were analyzed during 2015–2017. The spectral signals derived from Digital Hemispherical Photographs (DHP), the Disaster Monitoring Constellation (DMC), and Sentinel-2 (S2) were crop-specific and sensor-independent. Models fitted to sensor-derived fAPAR (fraction of absorbed photosynthetically active radiation) demonstrated a higher goodness of fit than fCover (fraction of vegetation cover), with the best model fits obtained for maize, followed by wheat and potato. S2-derived fAPAR showed decreasing variability as the growing season progressed. A double sigmoid model fit allowed the definition of inflection points corresponding to stem elongation (upward sigmoid) and senescence (downward sigmoid), while the upward endpoint corresponded to canopy closure and the maximum values to flowering and fruit development. Furthermore, increasing the frequency of sensor revisits is beneficial for detecting short-duration crop phenological stages. The results have implications for data assimilation to improve crop yield forecasting and agri-environmental modeling.
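The double sigmoid fit described above can be illustrated with SciPy. This is a generic sketch of the model family, not the study's calibrated fit: the parameterization, the synthetic fAPAR season, and the initial guesses are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_sigmoid(t, base, amp, k1, t1, k2, t2):
    """Upward sigmoid (green-up, inflection at t1) minus downward
    sigmoid (senescence, inflection at t2), plus a baseline."""
    up = 1.0 / (1.0 + np.exp(-k1 * (t - t1)))
    down = 1.0 / (1.0 + np.exp(-k2 * (t - t2)))
    return base + amp * (up - down)

# Synthetic fAPAR season: green-up around day 120, senescence near day 240.
t = np.arange(0, 330, 10, dtype=float)
truth = double_sigmoid(t, 0.05, 0.85, 0.08, 120.0, 0.06, 240.0)
obs = truth + np.random.default_rng(0).normal(0.0, 0.02, t.size)

p0 = [0.1, 0.8, 0.1, 100.0, 0.1, 250.0]  # rough initial guesses
params, _ = curve_fit(double_sigmoid, t, obs, p0=p0, maxfev=10000)
print(f"green-up inflection ~ day {params[3]:.0f}, "
      f"senescence ~ day {params[5]:.0f}")
```

The two fitted inflection points (`t1`, `t2`) are the quantities the abstract associates with stem elongation and senescence.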

    A hierarchical clustering method for land cover change detection and identification

    A method to detect abrupt land cover changes using hierarchical clustering of multi-temporal satellite imagery was developed. The Autochange method outputs the pre-change land cover class, the change magnitude, and the change type. Pre-change land cover information is transferred to post-change imagery based on classes derived by unsupervised clustering, enabling the use of data from different instruments for pre- and post-change. The change magnitude and change types are computed by unsupervised clustering of the post-change image within each cluster, and by comparing the mean intensity values of the lower-level clusters with their parent cluster means. A computational approach to determine the change magnitude threshold for abrupt change was developed. The method was demonstrated with three summer image pairs (Sentinel-2/Sentinel-2, Landsat 8/Sentinel-2, and Sentinel-2/ALOS-2 PALSAR) in a study area of 12,372 km² in southern Finland for the detection of forest clear-cuts and tested with independent data. The Sentinel-2 classification produced an omission error of 5.6% for the cut class and 0.4% for the uncut class; commission errors were 4.9% for the cut class and 0.4% for the uncut class. For the Landsat 8/Sentinel-2 classification the equivalent figures were 20.8%, 0.2%, 3.4%, and 1.6%, and for the Sentinel-2/ALOS PALSAR classification 16.7%, 1.4%, 17.8%, and 1.3%, respectively. The Autochange algorithm and its software implementation were considered applicable for the mapping of abrupt land cover changes using multi-temporal satellite data, allowing images even from optical and synthetic aperture radar (SAR) sensors to be mixed in the same change analysis.
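The parent/child cluster comparison at the heart of the method can be sketched as follows. This is a toy illustration, not the published Autochange implementation: the two-feature "pixels", the single parent cluster, and the fixed 0.2 threshold (which the paper instead derives computationally) are all assumptions of the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Toy "pixels": pre-change feature vectors for one land cover stratum,
# and post-change vectors where a subset has lost vegetation (clear-cut).
pre = rng.normal(loc=[0.6, 0.4], scale=0.03, size=(300, 2))
post = pre.copy()
post[:60] = rng.normal(loc=[0.2, 0.1], scale=0.03, size=(60, 2))

# Parent cluster: the pre-change class (a single cluster for brevity).
parent_mean = pre.mean(axis=0)

# Child clusters: unsupervised clustering of the post-change pixels.
children = KMeans(n_clusters=2, n_init=10, random_state=0).fit(post)

# Change magnitude per child = distance of its mean from the parent mean.
magnitudes = np.linalg.norm(children.cluster_centers_ - parent_mean, axis=1)
threshold = 0.2  # fixed here; Autochange determines this automatically
changed = magnitudes > threshold
print(magnitudes.round(2), changed)
```

One child cluster stays near the parent mean (no change) while the other is flagged as abrupt change, mirroring the parent-versus-child mean comparison described in the abstract.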

    Validation of Copernicus Sentinel-2 Cloud Masks Obtained from MAJA, Sen2Cor, and FMask Processors Using Reference Cloud Masks Generated with a Supervised Active Learning Procedure

    The Sentinel-2 satellite mission, developed by the European Space Agency (ESA) for the Copernicus program of the European Union, provides repetitive multi-spectral observations of all Earth land surfaces at high resolution. The Level 2A product is a basic product requested by many Sentinel-2 users: it provides surface reflectance after atmospheric correction, together with a cloud and cloud shadow mask. The cloud/shadow mask is a key element in enabling automatic processing of Sentinel-2 data, and therefore its performance must be accurately validated. To validate the Sentinel-2 operational Level 2A cloud mask, a software program named Active Learning Cloud Detection (ALCD) was developed to produce reference cloud masks. Active learning methods reduce the number of necessary training samples by iteratively selecting them where the confidence of the classifier was low in the previous iterations. The ALCD method was designed to minimize human operator time thanks to a manually supervised active learning method. The trained classifier uses a combination of spectral and multi-temporal information as input features and produces fully classified images. The ALCD method was validated using visual criteria and consistency checks, and compared to other manually generated cloud masks, with an overall accuracy above 98%. ALCD was used to create 32 reference cloud masks, on 10 different sites, with different seasons and cloud cover types. These masks were used to validate the cloud and shadow masks produced by three Sentinel-2 Level 2A processors: MAJA, used by the French Space Agency (CNES) to deliver Level 2A products; Sen2Cor, used by the European Space Agency (ESA); and FMask, used by the United States Geological Survey (USGS). The results show that MAJA and FMask perform similarly, with an overall accuracy around 90% (91% for MAJA, 90% for FMask), while Sen2Cor's overall accuracy is 84%. The reference cloud masks, as well as the ALCD software used to generate them, are made available to the Sentinel-2 user community.
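The pool-based active learning loop that ALCD's operator-in-the-loop procedure relies on can be sketched generically. This is not the ALCD software: the synthetic data, batch size of 20, and five iterations are assumptions, and the oracle labels `y` stand in for the human operator's annotations.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Unlabeled pool standing in for pixels of a Sentinel-2 scene.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))

for _ in range(5):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labeled], y[labeled])
    confidence = clf.predict_proba(X).max(axis=1)
    confidence[labeled] = np.inf          # never re-select labeled samples
    worst = np.argsort(confidence)[:20]   # least confident -> ask the operator
    labeled.extend(worst.tolist())

print(f"labeled {len(labeled)} samples, pool accuracy {clf.score(X, y):.2f}")
```

Selecting where classifier confidence is lowest is exactly the sample-saving mechanism the abstract credits for minimizing operator time.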

    Cloud Mask Intercomparison eXercise (CMIX): An evaluation of cloud masking algorithms for Landsat 8 and Sentinel-2

    Cloud cover is a major limiting factor in exploiting time-series data acquired by optical spaceborne remote sensing sensors. A number of cloud masking algorithms have been developed for optical sensors, but very few studies have carried out a quantitative intercomparison of state-of-the-art methods in this domain. This paper summarizes results of the first Cloud Masking Intercomparison eXercise (CMIX), conducted within the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration & Validation (WGCV). CEOS is the forum for space agency coordination and cooperation on Earth observations, with activities organized under working groups. CMIX, as one such activity, is an international collaborative effort aimed at intercomparing cloud detection algorithms for moderate-spatial-resolution (10–30 m) spaceborne optical sensors. The focus of CMIX is on open and free imagery acquired by the Landsat 8 (NASA/USGS) and Sentinel-2 (ESA) missions. Ten algorithms developed by nine teams from fourteen different organizations, representing universities, research centers, and industry, as well as space agencies (CNES, ESA, DLR, and NASA), are evaluated within CMIX. The algorithms vary in approach, drawing on spectral properties, spatial and temporal features, and machine learning methods. Algorithm outputs are evaluated against existing reference cloud mask datasets, which vary in sampling methods, geographical distribution, sample unit (points, polygons, full image labels), and generation approaches (experts, machine learning, sky images). Overall, the performance of algorithms varied depending on the reference dataset, which can be attributed to differences in how the reference datasets were produced.
    The algorithms were in good agreement for thick, opaque clouds, whose identification carries lower uncertainty, in contrast to thin/semi-transparent clouds. CMIX allowed the identification not only of strengths and weaknesses of existing algorithms and potential areas of improvement, but also of problems associated with the existing reference datasets. The paper concludes with recommendations on generating new reference datasets, metrics, and an analysis framework to be further exploited, and on additional input datasets to be considered by future CMIX activities.
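The agreement measures used throughout these intercomparisons (overall accuracy, plus per-class omission and commission errors, as in the Autochange figures above) follow directly from a pixel-wise confusion between a reference mask and a predicted mask. A minimal sketch, with invented toy masks:

```python
import numpy as np

def mask_agreement_metrics(reference, predicted):
    """Overall accuracy plus per-class omission/commission error for a
    binary cloud mask (1 = cloud, 0 = clear)."""
    ref = np.asarray(reference).ravel()
    pred = np.asarray(predicted).ravel()
    metrics = {"overall_accuracy": float(np.mean(ref == pred))}
    for cls in (0, 1):
        # omission: reference pixels of this class missed by the prediction
        metrics[f"omission_{cls}"] = 1.0 - float(np.mean(pred[ref == cls] == cls))
        # commission: predicted pixels of this class that are wrong
        metrics[f"commission_{cls}"] = 1.0 - float(np.mean(ref[pred == cls] == cls))
    return metrics

ref = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
m = mask_agreement_metrics(ref, pred)
print(m)  # overall accuracy 0.8; cloud omission and commission both 0.25
```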