52 research outputs found

    Superresolution Enhancement of Hyperspectral CHRIS/Proba Images With a Thin-Plate Spline Nonrigid Transform Model

    Given the hyperspectral-oriented waveband configuration of multiangular CHRIS/Proba imagery, the scope of its applications could widen if the present 18-m resolution were improved. The multiangular images of CHRIS can be used as input for superresolution (SR) image reconstruction. A critical procedure in SR is the accurate registration of the low-resolution images. Conventional methods based on affine transformation may not be effective given the local geometric distortion in high off-nadir angular images. This paper examines the use of a nonrigid transform to improve the result of a nonuniform interpolation and deconvolution SR method. A scale-invariant feature transform is used to collect control points (CPs). To ensure the quality of the CPs, a rigorous screening procedure is designed: 1) an ambiguity test; 2) the m-estimator sample consensus method; and 3) an iterative method using statistical characteristics of the distribution of random errors. A thin-plate spline (TPS) nonrigid transform is then used for the registration. The proposed registration method is examined with a Delaunay triangulation-based nonuniform interpolation and reconstruction SR method. Our results show that the TPS nonrigid transform allows accurate registration of angular images. SR results obtained from simulated low-resolution images are evaluated using three quantitative measures: relative mean-square error, structural similarity, and edge stability. Compared to SR methods that use an affine transform, the proposed method performs better on all three evaluation measures. With a higher level of spatial detail, SR-enhanced CHRIS images might be more effective than the original data in various applications. JRC.H.7-Climate Risk Management
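    A TPS registration step of the kind described above can be sketched as follows; this is a minimal illustration, not the paper's implementation. The control-point coordinates are invented, and SciPy's RBFInterpolator with a thin-plate-spline kernel stands in for the TPS transform fitted to screened CPs.

    ```python
    # Sketch: fit a thin-plate-spline (TPS) mapping from control points (CPs)
    # and use it to warp pixel coordinates of a reference image into the
    # frame of an off-nadir angular image. CP values are toy data.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Matched CPs: (x, y) in the reference image -> (x, y) in the angular image.
    src = np.array([[0, 0], [0, 10], [10, 0], [10, 10], [5, 5]], float)
    dst = src + np.array([[0.3, -0.2], [0.1, 0.4], [-0.2, 0.1],
                          [0.2, 0.2], [0.5, -0.5]])

    # smoothing=0 makes the TPS interpolate the CPs exactly.
    tps = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=0.0)

    # Warp a grid of pixel coordinates into the angular image frame.
    grid = np.stack(np.meshgrid(np.arange(11), np.arange(11)), -1).reshape(-1, 2)
    warped = tps(grid.astype(float))
    print(warped.shape)  # (121, 2)
    ```

    The warped coordinates would then drive the nonuniform interpolation step of the SR pipeline.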

    Crop classification from Sentinel-2 time series with temporal convolutional neural networks

    Automated crop identification tools are of interest to a wide range of applications related to the environment and agriculture, including the monitoring of related policies such as the European Common Agriculture Policy. In this context, this work presents a parcel-based crop classification system which leverages the supervised learning capacity of a 1D convolutional neural network. For the training and evaluation of the model, we employ open and free data: (i) a time series of Sentinel-2 optical data selected to cover one year's crop season, and (ii) a cadastre-derived database providing detailed delineation of parcels. By considering the most dominant crop types and the temporal features of the optical data, the proposed lightweight approach discriminates a considerable number of crops with high accuracy.
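    The core operation of such a 1D temporal CNN can be sketched in NumPy; this is an illustrative assumption about the architecture, with random weights standing in for trained parameters and a synthetic parcel time series standing in for Sentinel-2 data.

    ```python
    # Sketch: a 1D convolution over a parcel's per-band time series, the
    # building block of a temporal CNN crop classifier. Weights are random
    # stand-ins for trained parameters.
    import numpy as np

    def conv1d(x, kernels, stride=1):
        """x: (timesteps, bands); kernels: (n_filters, width, bands).
        Returns ReLU feature maps of shape (out_steps, n_filters)."""
        t, b = x.shape
        n, w, _ = kernels.shape
        out = np.empty(((t - w) // stride + 1, n))
        for i in range(out.shape[0]):
            window = x[i * stride:i * stride + w]            # (width, bands)
            out[i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
        return np.maximum(out, 0.0)                          # ReLU

    rng = np.random.default_rng(0)
    series = rng.random((24, 10))        # e.g. 24 acquisition dates, 10 bands
    kernels = rng.standard_normal((16, 5, 10)) * 0.1
    features = conv1d(series, kernels)
    print(features.shape)  # (20, 16)
    ```

    Stacking a few such layers followed by pooling and a softmax head yields a per-parcel crop probability vector.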

    AI4Boundaries: an open AI-ready dataset to map field boundaries with Sentinel-2 and aerial photography

    Field boundaries are at the core of many agricultural applications and are a key enabler for the operational monitoring of agricultural production to support food security. Recent scientific progress in deep learning methods has highlighted the capacity to extract field boundaries from satellite and aerial images, with a clear improvement over object-based image analysis (e.g. multiresolution segmentation) and conventional filters (e.g. Sobel filters). However, these methods need labels to be trained on. So far, no standard data set exists to easily and robustly benchmark models and progress the state of the art. The absence of such benchmark data further impedes proper comparison against existing methods. Besides, there is no consensus on which evaluation metrics should be reported (both at the pixel and field levels). As a result, it is currently impossible to compare and benchmark new and existing methods. To fill these gaps, we introduce AI4Boundaries, a data set of images and labels readily usable to train and compare models on field boundary detection. AI4Boundaries includes two specific data sets: (i) 10 m Sentinel-2 monthly composites for retrospective large-scale analyses and (ii) a 1 m orthophoto data set for regional-scale analyses, such as the automatic extraction of Geospatial Aid Applications (GSAA). All labels have been sourced from GSAA data that have been made openly available (Austria, Catalonia, France, Luxembourg, the Netherlands, Slovenia, and Sweden) for 2019, representing 14.8 M parcels covering 376 K km2. Data were selected following a stratified random sampling drawn based on two landscape fragmentation metrics, the perimeter/area ratio and the area covered by parcels, thus considering the diversity of the agricultural landscapes. The resulting “AI4Boundaries” dataset consists of 7831 samples of 256 by 256 pixels for the 10 m Sentinel-2 dataset and of 512 by 512 pixels for the 1 m aerial orthophoto.
Both datasets are provided with the corresponding vector ground-truth parcel delineation (2.5 M parcels covering 47 105 km2), and with a raster version already pre-processed and ready to use. Besides providing this open dataset to foster computer vision developments of parcel delineation methods, we discuss the perspectives and limitations of the dataset for various types of applications in the agriculture domain and consider possible further improvements. The data are available on the JRC Open Data Catalogue: http://data.europa.eu/89h/0e79ce5d-e4c8-4721-8773-59a4acf2c9c9 (European Commission, Joint Research Centre, 2022).
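    The stratified random sampling over two fragmentation metrics can be sketched as follows; the metric values, bin edges, and per-stratum sample size are synthetic assumptions for illustration, not the dataset's actual design parameters.

    ```python
    # Sketch: bin candidate tiles by two landscape-fragmentation metrics
    # (perimeter/area ratio and parcel coverage), then draw evenly from
    # each stratum. All values here are synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(42)
    n_tiles = 1000
    pa_ratio = rng.random(n_tiles)     # normalised perimeter/area ratio
    coverage = rng.random(n_tiles)     # fraction of tile covered by parcels

    # 3 x 3 strata over the two metrics.
    pa_bin = np.digitize(pa_ratio, [1 / 3, 2 / 3])
    cov_bin = np.digitize(coverage, [1 / 3, 2 / 3])
    stratum = pa_bin * 3 + cov_bin

    per_stratum = 10
    sample = np.concatenate([
        rng.choice(np.flatnonzero(stratum == s), per_stratum, replace=False)
        for s in range(9)
    ])
    print(len(sample))  # 90
    ```

    Sampling per stratum rather than uniformly keeps rare landscape types (e.g. highly fragmented tiles) represented in the final dataset.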

    Tree species mapping by combining hyperspectral with LiDAR data

    This study deals with data fusion of hyperspectral and LiDAR sensors for forest applications. In particular, the added value of different data sources for tree species mapping has been analyzed. A total of seven species have been mapped for a forested area in Belgium: Beech, Ash, Larch, Poplar, Copper beech, Chestnut and Oak. Hyperspectral data were obtained from the APEX sensor in 286 spectral bands. Full-waveform LiDAR data were acquired with a TopoSys Harrier 56 sensor. Confirming previous research [1], it has been found that airborne LiDAR data, when combined with hyperspectral data, can improve classification results. The novelty of this study is in the quantification of the contribution of the individual data sources and their derived parameters. LiDAR information was combined with the hyperspectral image in a data fusion approach. Different data fusion techniques were tested, including feature and decision fusion. Decision fusion produced optimal results, reaching an overall accuracy of 96% (Kappa [3] of 0.95).
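    Decision fusion of the kind mentioned above can be sketched as a weighted combination of per-class posteriors from the two sensor-specific classifiers; the weights and posterior values below are illustrative assumptions, not the study's fitted parameters.

    ```python
    # Sketch: decision-level fusion of a hyperspectral classifier and a
    # LiDAR classifier by weighted averaging of class posteriors, followed
    # by an argmax. Weights and posteriors are illustrative.
    import numpy as np

    def fuse_decisions(p_hyper, p_lidar, w_hyper=0.6, w_lidar=0.4):
        """Each input: (n_pixels, n_classes) posterior probabilities."""
        fused = w_hyper * p_hyper + w_lidar * p_lidar
        return fused.argmax(axis=1)

    p_hyper = np.array([[0.7, 0.2, 0.1],
                        [0.4, 0.5, 0.1]])
    p_lidar = np.array([[0.5, 0.3, 0.2],
                        [0.1, 0.2, 0.7]])
    labels = fuse_decisions(p_hyper, p_lidar)
    print(labels)  # [0 1]
    ```

    Unlike feature fusion, each sensor keeps its own classifier, so a weak modality can only shift a decision in proportion to its weight.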

    Chlorophyll content estimation in an open-canopy conifer forest with Sentinel-2A and hyperspectral imagery in the context of forest decline

    With the advent of Sentinel-2, it is now possible to generate large-scale chlorophyll content maps with unprecedented spatial and temporal resolution, suitable for monitoring ecological processes such as vegetative stress and/or decline. However, methodological gaps exist for adapting this technology to heterogeneous natural vegetation and for transferring it among vegetation species or plant functional types. In this study, we investigated the use of Sentinel-2A imagery for estimating needle chlorophyll (Ca+b) in a sparse pine forest undergoing significant needle loss and tree mortality. Sentinel-2A scenes were acquired under two extreme viewing geometries (June vs. December 2016) coincident with the acquisition of high-spatial resolution hyperspectral imagery, and field measurements of needle chlorophyll content and crown leaf area index. Using the high resolution hyperspectral scenes acquired over 61 validation sites, we found that the CI chlorophyll index R750/R710 and the Macc index (which uses spectral bands centered at 680 nm, 710 nm and 780 nm) had the strongest relationship with needle chlorophyll content from individual tree crowns (r2 = 0.61 and r2 = 0.59, respectively; r2 > 0.7 for June and > 0.4 for December; p < 0.001). The retrieval of needle chlorophyll content from the entire Sentinel-2A bandset using the radiative transfer model INFORM yielded r2 = 0.71 (RMSE = 8.1 μg/cm2) for June, r2 = 0.42 (RMSE = 12.2 μg/cm2) for December, and r2 = 0.6 (RMSE = 10.5 μg/cm2) as overall performance using the June and December datasets together. This study demonstrates the retrieval of leaf Ca+b with Sentinel-2A imagery by red-edge indices and by an inversion method based on a hybrid canopy reflectance model that accounts for tree density, background and shadow components common in sparse forest canopies. JRC.D.1-Bio-economy
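    The ratio index named above, CI = R750/R710, is simple to compute per pixel from red-edge reflectance bands; the reflectance values below are synthetic stand-ins, not measurements from the study.

    ```python
    # Sketch: the red-edge chlorophyll index CI = R750 / R710, computed
    # per pixel from reflectance bands (synthetic values).
    import numpy as np

    def chlorophyll_index(r750, r710, eps=1e-6):
        """Ratio index R750/R710; higher values indicate more chlorophyll.
        eps guards against division by zero in masked/dark pixels."""
        return r750 / (r710 + eps)

    r750 = np.array([0.45, 0.40, 0.20])   # near-infrared shoulder reflectance
    r710 = np.array([0.15, 0.20, 0.18])   # red-edge reflectance
    ci = chlorophyll_index(r750, r710)
    print(np.round(ci, 2))
    ```

    For Sentinel-2, the nearest band centres (e.g. B5 at 705 nm, B6 at 740 nm) would stand in for the 710 nm and 750 nm wavelengths.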

    Optimizing Sentinel-2 image selection in a big data context

    Processing large amounts of image data such as the Sentinel-2 archive is a computationally demanding task. However, for most applications, many of the images in the archive are redundant and do not contribute to the quality of the final result. An optimization scheme is presented here that selects a subset of the Sentinel-2 archive in order to reduce the amount of processing, while retaining the quality of the resulting output. As a case study, we focused on the creation of a cloud-free composite, covering the global land mass and based on all the images acquired from January 2016 until September 2017. The total number of available images was 2,128,556. The selection of the optimal subset was based on quicklooks, which correspond to a spatial and spectral subset of the original Sentinel-2 products and are lossy compressed. The selected subset contained 94,093 image tiles in total, reducing the amount of images to be processed to 4.42% of the full set. JRC.I.3-Text and Data Mining
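    One way to realise such a selection is a greedy set-cover heuristic over cloud masks derived from quicklooks; this is an assumption about the optimization scheme, shown on a tiny synthetic example, not the paper's actual algorithm.

    ```python
    # Sketch (greedy set-cover heuristic, assumed): pick images one at a
    # time, each time choosing the one that adds the most not-yet-covered
    # cloud-free pixels, until no image adds new coverage.
    import numpy as np

    def select_subset(masks):
        """masks: (n_images, n_pixels) booleans, True = cloud-free pixel."""
        covered = np.zeros(masks.shape[1], bool)
        chosen = []
        while True:
            gains = (masks & ~covered).sum(axis=1)   # new pixels per image
            best = int(gains.argmax())
            if gains[best] == 0:
                break
            chosen.append(best)
            covered |= masks[best]
        return chosen, covered

    masks = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1],
                      [1, 1, 1, 0]], bool)
    chosen, covered = select_subset(masks)
    print(chosen, covered.all())  # [3, 2] True
    ```

    Only the small subset returned by the quicklook-level selection would then be fetched and processed at full resolution.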

    Mosaic of Copernicus Sentinel-2 data at global scale

    Global cloud-free mosaic based on a minimum number of Copernicus Sentinel-2 products. JRC.I.3-Text and Data Mining