
    KappaMask: AI-Based Cloudmask Processor for Sentinel-2

    The Copernicus Sentinel-2 mission operated by the European Space Agency (ESA) has provided comprehensive and continuous multi-spectral observations of the Earth's entire land surface since mid-2015. Clouds and cloud shadows significantly decrease the usability of optical satellite data, especially in agricultural applications; therefore, an accurate and reliable cloud mask is mandatory for effective exploitation of optical EO data. During the last few years, image segmentation techniques have developed rapidly with the exploitation of neural network capabilities. With this perspective, the KappaMask processor using a U-Net architecture was developed with the ability to generate a classification mask over northern latitudes into the following classes: clear, cloud shadow, semi-transparent cloud (thin clouds), cloud and invalid. For training, a Sentinel-2 dataset covering the Northern European terrestrial area was labelled. KappaMask provides a 10 m classification mask for Sentinel-2 Level-2A (L2A) and Level-1C (L1C) products. The total Dice coefficient on the test dataset, which was not seen by the model at any stage, was 80% for KappaMask L2A and 76% for KappaMask L1C over the clear, cloud shadow, semi-transparent and cloud classes. A comparison with rule-based cloud mask methods was then performed on the same test dataset: Sen2Cor reached a 59% Dice coefficient for the clear, cloud shadow, semi-transparent and cloud classes, Fmask reached 61% for the clear, cloud shadow and cloud classes, and MAJA reached 51% for the clear and cloud classes. The closest machine learning open-source cloud classification mask, S2cloudless, had a 63% Dice coefficient providing only cloud and clear classes, while KappaMask L2A, with a more complex classification schema, outperformed S2cloudless by 17 percentage points.
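    The Dice coefficient used above to score the masks can be computed per class from paired prediction/reference labels. A minimal sketch in Python (the class codes and toy arrays are illustrative, not KappaMask's actual encoding):

```python
# Per-class Dice coefficient between a predicted and a reference mask.
# Dice = 2*|P ∩ R| / (|P| + |R|) for each class, computed over flattened labels.
def dice_per_class(pred, ref, classes):
    scores = {}
    for c in classes:
        p = [v == c for v in pred]
        r = [v == c for v in ref]
        intersection = sum(a and b for a, b in zip(p, r))
        total = sum(p) + sum(r)
        # Convention: perfect score when the class is absent from both masks.
        scores[c] = 2 * intersection / total if total else 1.0
    return scores

# Toy 1-D example; hypothetical codes: 0=clear, 1=cloud shadow,
# 2=semi-transparent, 3=cloud.
pred = [0, 0, 1, 3, 3, 2]
ref  = [0, 1, 1, 3, 2, 2]
print(dice_per_class(pred, ref, [0, 1, 2, 3]))
```

    Averaging the per-class scores gives a single figure comparable to the percentages reported in the abstract.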

    Machine learning for cloud detection of globally distributed sentinel-2 images

    In recent years, a number of different procedures have been proposed for segmentation of remote sensing images based on spectral information. Model-based and machine learning strategies have been investigated in several studies. This work presents a comprehensive overview and an unbiased comparison of the most widely adopted segmentation strategies: Support Vector Machines (SVM), Random Forests, neural networks, Sen2Cor, FMask and MAJA. We used a training set for learning and two different independent sets for testing. The comparison accounted for 135 images acquired from 54 different worldwide sites. We observed that machine learning segmentations are extremely reliable when the training and test sets are homogeneous, and SVM performed slightly better than the other methods. When using heterogeneous test data, SVM remained the most accurate segmentation method, while state-of-the-art model-based methods such as MAJA and FMask obtained better sensitivity and precision, respectively. Therefore, even if each method has its specific advantages and drawbacks, SVM proved a competitive option for remote sensing applications.

    Cloud Mask Intercomparison eXercise (CMIX): An evaluation of cloud masking algorithms for Landsat 8 and Sentinel-2

    Cloud cover is a major limiting factor in exploiting time-series data acquired by optical spaceborne remote sensing sensors. Multiple methods have been developed to address the problem of cloud detection in satellite imagery, and a number of cloud masking algorithms have been developed for optical sensors, but very few studies have carried out a quantitative intercomparison of state-of-the-art methods in this domain. This paper summarizes results of the first Cloud Masking Intercomparison eXercise (CMIX) conducted within the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration & Validation (WGCV). CEOS is the forum for space agency coordination and cooperation on Earth observations, with activities organized under working groups. CMIX, as one such activity, is an international collaborative effort aimed at intercomparing cloud detection algorithms for moderate-spatial-resolution (10–30 m) spaceborne optical sensors. The focus of CMIX is on open and free imagery acquired by the Landsat 8 (NASA/USGS) and Sentinel-2 (ESA) missions. Ten algorithms developed by nine teams from fourteen different organizations representing universities, research centers and industry, as well as space agencies (CNES, ESA, DLR, and NASA), are evaluated within CMIX. These algorithms vary in their approaches and concepts, drawing on various spectral properties, spatial and temporal features, as well as machine learning methods. Algorithm outputs are evaluated against existing reference cloud mask datasets, which vary in sampling methods, geographical distribution, sample unit (points, polygons, full image labels), and generation approaches (experts, machine learning, sky images). Overall, the performance of algorithms varied depending on the reference dataset, which can be attributed to differences in how the reference datasets were produced.
    The algorithms were in good agreement on the detection of thick, opaque clouds, which carry lower identification uncertainty, in contrast to the detection of thin/semi-transparent clouds. CMIX allowed identification not only of the strengths and weaknesses of existing algorithms and potential areas of improvement, but also of the problems associated with the existing reference datasets. The paper concludes with recommendations on generating new reference datasets, metrics, and an analysis framework to be further exploited, as well as additional input datasets to be considered by future CMIX activities.

    AgroShadow: A New Sentinel-2 Cloud Shadow Detection Tool for Precision Agriculture

    Remote sensing for precision agriculture has been strongly fostered by the launches of the European Space Agency (ESA) Sentinel-2 optical imaging constellation, enabling both academic and private services to redirect farmers towards a more productive and sustainable management of agroecosystems. Alongside the free and open access policy adopted by ESA, software and tools are also available for data processing and deeper analysis. Nowadays, a bottleneck in this valuable chain is the difficulty of shadow identification in Sentinel-2 data, which remains a troublesome problem for precision agriculture applications. To overcome the issue, we present a simplified tool, AgroShadow, to gain full advantage from Sentinel-2 products and solve the trade-off between the omission errors of Sen2Cor (the algorithm used by ESA) and the commission errors of MAJA (the algorithm used by the Centre National d'Etudes Spatiales/Deutsches Zentrum für Luft- und Raumfahrt, CNES/DLR). AgroShadow was tested and compared against Sen2Cor and MAJA on 33 Sentinel-2A/B scenes covering the whole of 2020 and 18 different scenarios across the whole of Italy at farming scale. AgroShadow returned the lowest error and the highest accuracy and F-score, while its precision, recall, specificity, and false positive rates were always similar to the best scores, which were returned alternately by Sen2Cor or MAJA.
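    The accuracy, precision, recall, and F-score figures used to rank the three shadow masks come from standard confusion-matrix counts on binary shadow/no-shadow labels. A generic sketch of those metrics (not the paper's evaluation code):

```python
# Precision, recall, and F-score for a binary shadow mask, from paired
# predicted and reference labels (1 = shadow, 0 = no shadow).
def binary_scores(pred, ref):
    tp = sum(p and r for p, r in zip(pred, ref))          # shadow correctly found
    fp = sum(p and not r for p, r in zip(pred, ref))      # commission errors
    fn = sum(not p and r for p, r in zip(pred, ref))      # omission errors
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

pred = [1, 1, 0, 0, 1]
ref  = [1, 0, 0, 1, 1]
print(binary_scores(pred, ref))
```

    In these terms, Sen2Cor's omission errors depress recall while MAJA's commission errors depress precision, which is exactly the trade-off AgroShadow targets.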

    Sentinel-2 Image Scene Classification: A Comparison between Sen2Cor and a Machine Learning Approach

    Given the continuous increase in the global population, food producers are driven either to intensify the use of cropland or to expand farmland, making the mapping of land cover and land use dynamics vital in remote sensing. In this regard, identifying and classifying a high-resolution satellite imagery scene is a prime challenge. Several approaches have been proposed, using either static rule-based thresholds (with limited diversity) or neural networks (with data-dependent limitations). This paper adopts the inductive approach of learning from surface reflectances. A manually labeled Sentinel-2 dataset was used to build a Machine Learning (ML) model for scene classification, distinguishing six classes (Water, Shadow, Cirrus, Cloud, Snow, and Other). This model was assessed and compared to the European Space Agency (ESA) Sen2Cor package. The proposed ML model achieves a Micro-F1 value of 0.84, a considerable improvement over the corresponding Sen2Cor performance of 0.59. Focusing on the problem of optical satellite image scene classification, the main research contributions of this paper are: (a) an extended manually labeled Sentinel-2 database adding surface reflectance values to an existing dataset; (b) an ensemble-based and a neural-network-based ML model; (c) an evaluation of model sensitivity, bias, and ability to classify multiple classes over different geographic Sentinel-2 imagery; and finally, (d) the benchmarking of the ML approach against the Sen2Cor package.
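    Micro-F1, the headline metric above, pools true positives, false positives, and false negatives across all classes before computing F1. An illustrative sketch (the labels and predictions below are made up, not the paper's data):

```python
# Micro-averaged F1 over a multi-class scene classification.
# For single-label problems, TP/FP/FN are pooled over all classes.
def micro_f1(pred, ref, classes):
    tp = fp = fn = 0
    for c in classes:
        tp += sum(p == c and r == c for p, r in zip(pred, ref))
        fp += sum(p == c and r != c for p, r in zip(pred, ref))
        fn += sum(p != c and r == c for p, r in zip(pred, ref))
    return 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0

classes = ["Water", "Shadow", "Cirrus", "Cloud", "Snow", "Other"]
pred = ["Water", "Cloud", "Snow", "Other", "Cloud"]
ref  = ["Water", "Cirrus", "Snow", "Other", "Cloud"]
print(micro_f1(pred, ref, classes))  # 0.8
```

    Because each pixel carries exactly one label here, every false positive for one class is a false negative for another, so micro-F1 coincides with overall accuracy.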

    Snow Coverage Mapping by Learning from Sentinel-2 Satellite Multispectral Images via Machine Learning Algorithms

    Snow coverage mapping plays a vital role not only in studying hydrology and climatology, but also in investigating crop disease overwintering for smart agriculture management. This work investigates snow coverage mapping by learning from Sentinel-2 satellite multispectral images via machine-learning methods. To this end, the largest dataset for snow coverage mapping (to the best of our knowledge) with three typical classes (snow, cloud and background) is first collected and labeled via the semi-automatic classification plugin in QGIS. Then, both random forest-based conventional machine learning and U-Net-based deep learning are applied to the semantic segmentation challenge in this work. The effects of various input band combinations are also investigated so that the most suitable one can be identified. Experimental results show that (1) both conventional machine-learning and advanced deep-learning methods significantly outperform the existing rule-based Sen2Cor product for snow mapping; (2) U-Net generally outperforms the random forest since both spectral and spatial information is incorporated in U-Net via convolution operations; (3) the best spectral band combination for U-Net is B2, B11, B4 and B9. It is concluded that a U-Net-based deep-learning classifier with four informative spectral bands is suitable for snow coverage mapping.

    Theia Snow collection: high-resolution operational snow cover maps from Sentinel-2 and Landsat-8 data

    The Theia Snow collection routinely provides high-resolution maps of the snow-covered area from Sentinel-2 and Landsat-8 observations. The collection covers selected areas worldwide, including the main mountain regions in western Europe (e.g. the Alps and Pyrenees) and the High Atlas in Morocco. Each product of the Theia Snow collection contains four classes: snow, no snow, cloud and no data. We present the algorithm used to generate the snow products and provide an evaluation of the accuracy of the Sentinel-2 snow products using in situ snow depth measurements, higher-resolution snow maps and visual control. The results suggest that snow is accurately detected in the Theia Snow collection and that the snow detection is more accurate than the Sen2Cor outputs (ESA level 2 product). An issue that should be addressed in a future release is the occurrence of false snow detection in some large clouds. The snow maps are currently produced and freely distributed, on average 5 days after image acquisition, as raster and vector files via the Theia portal (https://doi.org/10.24400/329360/F7Q52MNK).
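    Optical snow mapping of this kind commonly builds on the Normalized Difference Snow Index, NDSI = (green - SWIR) / (green + SWIR), which exploits the fact that snow is bright in the visible bands but dark in the shortwave infrared. A minimal illustrative classifier (the 0.4 threshold is a common textbook value, not the operational Theia setting):

```python
# Single-threshold snow/no-snow decision from green and SWIR reflectances,
# based on the Normalized Difference Snow Index (NDSI). Illustrative only.
def ndsi(green, swir):
    return (green - swir) / (green + swir) if (green + swir) else 0.0

def classify_snow(green, swir, threshold=0.4):
    return "snow" if ndsi(green, swir) > threshold else "no snow"

print(classify_snow(0.6, 0.1))  # bright green, dark SWIR -> snow
print(classify_snow(0.3, 0.3))  # flat spectrum -> no snow
```

    The operational algorithm adds further tests (e.g. handling of clouds and elevation), which is why large clouds can still trigger false snow detections.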

    Detection and classification of changes in agriculture, forest, and shrublands for land cover map updating in Portugal

    Costa, H., Benevides, P., Moreira, F. D., & Caetano, M. (2022). Detection and classification of changes in agriculture, forest, and shrublands for land cover map updating in Portugal. In C. M. U. Neale, & A. Maltese (Eds.), Proceedings of SPIE: Remote Sensing for Agriculture, Ecosystems, and Hydrology XXIV (Vol. 12262, pp. 19). SPIE. https://doi.org/10.1117/12.2636127
    Portugal produced a land cover map for 2018 based on Sentinel-2 data; the map represents 13 classes, including agriculture, six tree forest species, and shrubland. The map was updated for 2020. The strategy focused on three strata where annual changes occur: S1 (agriculture) due to crop rotation, S2 (forest and shrubland) due to wildfires and clear-cuts, and S3 (fire scars and clear-cuts of previous years) where vegetation regeneration occurs. The methodology included i) change detection, ii) classification, and iii) knowledge-based rules. Stratum S1 was classified with images of the entire 2020 crop year and a training dataset extracted from the national Land Parcel Identification System (LPIS) of 2020. The land cover nomenclature was expanded and the class agriculture was split into three distinct classes, resulting in a map with 15 classes in total. Change detection, implemented in stratum S2, analyzed the profile of NDVI since 2018 to find potential loss of vegetation. S2 and S3 were classified in two stages: first, images of the entire 2020 crop year were used, and then data of October 2020 (end of crop year) to capture late changes. The training points of the 2018 land cover map were used, but only if not associated with NDVI change. For all three strata, knowledge-based rules corrected misclassifications and ensured consistency between the maps. A comparison between 2018 and 2020 reveals important land cover dynamics related to vegetation loss and regeneration on ~5% of the country.
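    The NDVI-based change detection described for stratum S2 can be sketched as a drop test against the running maximum of a pixel's NDVI time profile (the threshold and toy profiles are hypothetical, not the paper's parameters):

```python
# NDVI = (NIR - red) / (NIR + red); vegetation loss is flagged when the
# profile falls sharply below its running maximum. Illustrative sketch only.
def ndvi(nir, red):
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def vegetation_loss(profile, drop=0.3):
    """Return True if NDVI drops by more than `drop` from its running maximum."""
    peak = profile[0]
    for v in profile[1:]:
        if peak - v > drop:
            return True
        peak = max(peak, v)
    return False

burned = [0.75, 0.78, 0.30, 0.25]   # sharp drop, e.g. wildfire or clear-cut
crops  = [0.40, 0.55, 0.60, 0.45]   # normal seasonal variation
print(vegetation_loss(burned), vegetation_loss(crops))
```

    Comparing against the running maximum rather than the previous observation makes the test robust to gradual seasonal decline while still catching abrupt losses.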