6 research outputs found

    Intersensor Remote Sensing Image Registration Using Multispectral Semantic Embeddings

    This letter presents a novel intersensor registration framework specifically designed to register Sentinel-3 (S3) operational data using the Sentinel-2 (S2) instrument as a reference. The substantially higher resolution of the Multispectral Instrument (MSI), on board S2, with respect to the Ocean and Land Color Instrument (OLCI), carried by S3, makes the former sensor a suitable spatial reference to finely adjust OLCI products. Nonetheless, the important spectral-spatial differences between the two instruments may prevent traditional registration mechanisms from effectively aligning data of such different nature. In this context, the proposed registration scheme advocates the use of a topic model-based embedding approach to conduct the intersensor registration task within a common multispectral semantic space, where the input imagery is represented according to its corresponding spectral feature patterns instead of low-level attributes. Thus, the OLCI products can be effectively registered to the MSI reference data by aligning the hidden patterns that fundamentally express the same visual concepts across the sensors. The experiments, conducted over four different S2 and S3 operational data collections, reveal that the proposed approach provides performance advantages over six different intersensor registration counterparts.
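
The central idea, aligning the two sensors in a shared semantic space rather than over raw intensities, can be sketched on synthetic data. In the minimal sketch below, a nearest-signature classification stands in for the letter's topic-model embedding, and a brute-force agreement search stands in for its registration step; the band counts, signatures, and parameters are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scene" of K latent land-cover classes (the shared semantics),
# arranged in blocky 4x4 patches.
K, H, W = 3, 64, 64
scene = (rng.random((H, W)) * K).astype(int)
scene = np.repeat(np.repeat(scene[::4, ::4], 4, axis=0), 4, axis=1)

# Two sensors observe the same scene with different band counts and
# spectral responses (hypothetical stand-ins for MSI and OLCI).
sig_a = rng.random((K, 4))   # "reference" sensor, 4 bands
sig_b = rng.random((K, 6))   # "target" sensor, 6 bands

img_a = sig_a[scene] + 0.02 * rng.standard_normal((H, W, 4))

true_shift = (3, -2)         # the target product is misregistered by (dy, dx)
scene_b = np.roll(scene, true_shift, axis=(0, 1))
img_b = sig_b[scene_b] + 0.02 * rng.standard_normal((H, W, 6))

def semantic_map(img, signatures):
    """Assign each pixel to its nearest spectral pattern: a crude stand-in
    for the paper's topic-model embedding. The label maps of both sensors
    index the same K shared patterns, so they are directly comparable."""
    d = ((img[..., None, :] - signatures) ** 2).sum(-1)  # H x W x K
    return d.argmin(-1)

sem_a, sem_b = semantic_map(img_a, sig_a), semantic_map(img_b, sig_b)

def estimate_shift(ref, tgt, max_shift=5):
    """Brute-force search for the shift maximizing semantic agreement."""
    best, best_score = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = (np.roll(tgt, (-dy, -dx), axis=(0, 1)) == ref).mean()
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

est = estimate_shift(sem_a, sem_b)
```

Because the matching happens between label maps in the shared semantic space, the very different band counts and radiometries of the two simulated sensors never have to be compared directly.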

    W-NetPan: Double-U network for inter-sensor self-supervised pan-sharpening

    The increasing availability of remote sensing data makes it possible to address spatial-spectral limitations by means of pan-sharpening methods. However, fusing inter-sensor data poses important challenges, in terms of resolution differences, sensor-dependent deformations, and ground-truth data availability, that demand more accurate pan-sharpening solutions. In response, this paper proposes a novel deep learning-based pan-sharpening model, termed the double-U network for self-supervised pan-sharpening (W-NetPan). In more detail, the proposed architecture adopts an innovative W-shape that integrates two U-Net segments which work sequentially to spatially match and fuse inter-sensor multi-modal data. In this way, a synergic effect is produced where the first segment resolves inter-sensor deviations while stimulating the second one to achieve a more accurate data fusion. Additionally, a joint loss formulation is proposed for effectively training the proposed model without external data supervision. The experimental comparison, conducted over four coupled Sentinel-2 and Sentinel-3 datasets, reveals the advantages of W-NetPan with respect to several of the most important state-of-the-art pan-sharpening methods available in the literature. The codes related to this paper will be available at https://github.com/rufernan/WNetPan
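
The self-supervised idea behind such a joint loss can be illustrated numerically: a fused product should reproduce the coarse multispectral observation when spatially degraded, while sharing the spatial detail of the high-resolution reference. The sketch below is a schematic stand-in for the paper's actual formulation (the W-shaped network itself is omitted); `downsample`, `grad_mag`, and the weight `alpha` are illustrative assumptions.

```python
import numpy as np

def downsample(x, f):
    """Average-pool by factor f (a stand-in for the coarse sensor's
    spatial response)."""
    H, W = x.shape
    return x.reshape(H // f, f, W // f, f).mean(axis=(1, 3))

def grad_mag(x):
    """Gradient magnitude, used as a simple proxy for spatial detail."""
    gy, gx = np.gradient(x)
    return np.hypot(gy, gx)

def joint_loss(fused, pan, ms_low, f=4, alpha=1.0):
    """Schematic self-supervised objective: the fused product must
    (i) reduce to the low-resolution observation when degraded, and
    (ii) match the spatial detail of the high-resolution reference."""
    spectral = np.mean((downsample(fused, f) - ms_low) ** 2)
    spatial = np.mean((grad_mag(fused) - grad_mag(pan)) ** 2)
    return spectral + alpha * spatial

# Toy check: the true high-resolution field scores better than naive upsampling.
x = np.linspace(0.0, 1.0, 32)
truth = np.sin(6 * x)[:, None] * np.cos(4 * x)[None, :]
pan = truth                       # high-resolution spatial reference
ms_low = downsample(truth, 4)     # what the coarse sensor sees
naive = np.repeat(np.repeat(ms_low, 4, 0), 4, 1)  # plain block upsampling

loss_good = joint_loss(truth, pan, ms_low)
loss_naive = joint_loss(naive, pan, ms_low)
```

Both candidates satisfy the spectral term exactly, so the gap comes entirely from the spatial-detail term: the point of combining the two terms without any external ground truth.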

    Sentinel-3/FLEX Biophysical Product Confidence Using Sentinel-2 Land-Cover Spatial Distributions

    The estimation of biophysical variables from remote sensing data raises important challenges in terms of the acquisition technology and its limitations. In particular, some vegetation parameters, such as chlorophyll fluorescence, require sensors with a high spectral resolution, which constrains the spatial resolution while significantly increasing subpixel land-cover heterogeneity. Precisely, this spatial variability often causes rather different canopy structures to be aggregated together, which eventually generates important deviations in the corresponding parameter quantification. In the context of the Copernicus program (and other related Earth Explorer missions), this article proposes a new statistical methodology to manage the subpixel spatial heterogeneity problem in Sentinel-3 (S3) and FLuorescence EXplorer (FLEX) by taking advantage of the higher spatial resolution of Sentinel-2 (S2). Specifically, the proposed approach first characterizes the subpixel spatial patterns of S3/FLEX using inter-sensor data from S2. Then, a multivariate analysis is conducted to model the influence of these spatial patterns on the errors of the estimated chlorophyll-related biophysical variables, which are used as fluorescence proxies. Finally, these modeled distributions are employed to predict the confidence of S3/FLEX products on demand. Our experiments, conducted using multiple operational S2 and simulated S3 data products, reveal the advantages of the proposed methodology to effectively measure the confidence and expected deviations of different vegetation parameters with respect to standard regression algorithms. The source codes of this work will be available at https://github.com/rufernan/PixelS3
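
A minimal sketch of the statistical idea, characterizing each coarse pixel's subpixel heterogeneity from fine-scale land cover and regressing retrieval error on it, might look as follows. The entropy-based heterogeneity measure, the one-dimensional least-squares fit, and all numbers are illustrative assumptions on synthetic data, not the article's actual multivariate model.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(fracs):
    """Shannon entropy of subpixel land-cover fractions: 0 for a pure
    pixel, maximal for an even mixture."""
    p = fracs[fracs > 0]
    return -(p * np.log(p)).sum()

# Hypothetical training set: each coarse (S3-like) pixel has fine-scale
# (S2-like) class fractions and an observed retrieval error for a
# chlorophyll-related parameter.
n, K = 200, 4
fracs = rng.dirichlet(np.ones(K), size=n)              # subpixel mixes
H = np.array([entropy(f) for f in fracs])              # heterogeneity
err = 0.05 + 0.3 * H + 0.02 * rng.standard_normal(n)   # error grows with mixing

# Least-squares fit of error vs heterogeneity (a 1-D stand-in for the
# paper's multivariate analysis).
A = np.c_[H, np.ones(n)]
coef, *_ = np.linalg.lstsq(A, err, rcond=None)

def predicted_deviation(f):
    """Expected parameter deviation (inverse confidence) for a coarse
    pixel with subpixel fractions f."""
    return coef[0] * entropy(np.asarray(f)) + coef[1]

pure = predicted_deviation([1.0, 0.0, 0.0, 0.0])
mixed = predicted_deviation([0.25, 0.25, 0.25, 0.25])
```

Once fitted, the model can score any new coarse pixel "on demand": heterogeneous pixels are flagged with a larger expected deviation than spectrally pure ones.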

    Multitemporal Mosaicing for Sentinel-3/FLEX Derived Level-2 Product Composites

    The increasing availability of remote sensing data raises important challenges in terms of operational data provision and spatial coverage for conducting global studies and analyses. In this regard, existing multitemporal mosaicing techniques are generally limited to producing spectral image composites without considering the particular features of higher-level biophysical and other derived products, such as those provided by the Sentinel-3 (S3) and FLuorescence EXplorer (FLEX) tandem missions. To alleviate these limitations, this article proposes a novel multitemporal mosaicing algorithm specially designed for operational S3-derived products and also studies its applicability within the FLEX mission context. Specifically, we design a new operational methodology to automatically produce multitemporal mosaics from derived S3/FLEX products with the objective of facilitating the automatic processing of high-level data products, where weekly, monthly, seasonal, or annual biophysical mosaics can be generated by means of four processes proposed in this work: 1) operational data acquisition; 2) spatial mosaicing and rearrangement; 3) temporal compositing; and 4) confidence measures. The experimental part of the work tests the consistency of the proposed framework over different S3 product collections while showing its advantages with respect to other standard mosaicing alternatives. The source codes of this work will be made available for reproducible research.
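
Steps 3 (temporal compositing) and 4 (confidence measures) of the pipeline can be sketched per pixel as below. The median compositing rule and the stability-based confidence score are illustrative choices standing in for the algorithm's actual definitions, which the abstract does not specify.

```python
import numpy as np

def temporal_composite(stack, valid):
    """Per-pixel composite over time with a simple confidence score.
    stack: (T, H, W) derived-product values; valid: boolean quality mask.
    Confidence here is the number of valid observations scaled down by
    their temporal spread (an illustrative measure, not the paper's)."""
    data = np.where(valid, stack, np.nan)
    composite = np.nanmedian(data, axis=0)   # robust temporal aggregate
    n_valid = valid.sum(axis=0)              # observation support
    spread = np.nanstd(data, axis=0)         # temporal stability
    confidence = n_valid / (1.0 + spread)
    return composite, confidence

# Toy weekly composite: five acquisitions, one screened out (e.g. cloudy).
stack = np.stack([np.full((2, 2), v) for v in [1.0, 2.0, 3.0, 4.0, 100.0]])
valid = np.ones_like(stack, dtype=bool)
valid[4] = False                             # the outlier acquisition is masked

comp, conf = temporal_composite(stack, valid)
```

Because the mask removes the spurious acquisition before compositing, the outlier never reaches the mosaic, and the confidence layer records both how many observations supported each pixel and how stable they were.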

    Robust Normalized Softmax Loss for Deep Metric Learning-Based Characterization of Remote Sensing Images With Label Noise

    Most deep metric learning-based image characterization methods exploit supervised information to model the semantic relations among remote sensing (RS) scenes. Nonetheless, the unprecedented availability of large-scale RS data makes the annotation of such images very challenging, requiring automated supportive processes. Whether the annotation is assisted by aggregation or crowd-sourcing, the RS large-variance problem, together with other important factors (e.g., geo-location/registration errors, land-cover changes, or low-quality Volunteered Geographic Information (VGI)), often introduces the so-called label noise, i.e., semantic annotation errors. In this article, we first investigate the deep metric learning-based characterization of RS images with label noise and propose a novel loss formulation, named robust normalized softmax loss (RNSL), for robustly learning the metrics among RS scenes. Specifically, our RNSL improves the robustness of the normalized softmax loss (NSL), commonly utilized for deep metric learning, by replacing its logarithmic function with the negative Box–Cox transformation in order to down-weight the contribution of noisy images to the learning of the corresponding class prototypes. Moreover, by truncating the loss with a certain threshold, we also propose a truncated robust normalized softmax loss (t-RNSL) which can further enforce the learning of class prototypes from image features that are highly similar to them, so that intraclass features are well grouped and interclass features well separated. Our experiments, conducted on two benchmark RS data sets, validate the effectiveness of the proposed approach with respect to different state-of-the-art methods in three different downstream applications (classification, clustering, and retrieval). The codes of this article will be publicly available from https://github.com/jiankang1991
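
The loss modification is concrete enough to sketch directly: the -log(p) penalty of the NSL is replaced by the negative Box–Cox transformation (1 - p**lam) / lam, which is bounded above by 1/lam, and the truncated variant additionally caps the loss for samples whose class probability falls below a threshold. The prototype setup, scale, and hyperparameter values below are illustrative assumptions; only the functional form follows the abstract.

```python
import numpy as np

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def class_probs(features, prototypes, scale):
    """Normalized-softmax class probabilities from cosine similarities
    between L2-normalized features and class prototypes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return softmax(scale * f @ w.T)

def rnsl(features, prototypes, labels, scale=10.0, lam=0.5):
    """Robust NSL: -log(p) is replaced by (1 - p**lam) / lam, bounding the
    penalty a mislabeled image can contribute."""
    p = class_probs(features, prototypes, scale)[np.arange(len(labels)), labels]
    return ((1.0 - p ** lam) / lam).mean()

def t_rnsl(features, prototypes, labels, scale=10.0, lam=0.5, tau=0.1):
    """Truncated variant: samples whose class probability falls below tau
    (likely label noise) get a constant loss, so they stop pulling on the
    class prototypes."""
    p = class_probs(features, prototypes, scale)[np.arange(len(labels)), labels]
    loss = (1.0 - p ** lam) / lam
    cap = (1.0 - tau ** lam) / lam
    return np.where(p < tau, cap, loss).mean()

# Toy comparison: the same feature with a correct vs a flipped label.
protos = np.array([[1.0, 0.0], [0.0, 1.0]])   # two class prototypes
x = np.array([[1.0, 0.1]])                    # a feature near class 0
clean = rnsl(x, protos, np.array([0]))        # correct label
noisy = rnsl(x, protos, np.array([1]))        # flipped (noisy) label
```

Whereas -log(p) grows without bound as p approaches 0, the robust loss saturates at 1/lam (2.0 with these settings), so a mislabeled image cannot dominate the prototype updates; t-RNSL goes further and freezes such low-probability samples at a constant penalty.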

    Multitemporal Mosaicing for Sentinel-3/FLEX L2 Products

    Master's thesis, Master's Degree in Intelligent Systems (code SIU043), academic year 2019-2020. The growth of remote sensing data in recent years is opening the way to global studies and analyses that we were previously unable to tackle due to the lack of available information. Because of this, existing multitemporal mosaicing techniques have been limited to producing spectral image composites without considering high-level biophysical features such as those obtained through missions like Sentinel-3 (S3) or the future FLuorescence EXplorer (FLEX). The objective of this work is to develop a multitemporal mosaicing algorithm for S3-derived products and to study its future use for the FLEX mission. Specifically, the aim is to design a new operational methodology to automatically produce multitemporal mosaics of derived products, thereby facilitating the processing of high-level biophysical products for a specific day or on a weekly, monthly, seasonal, or annual basis; that is, to automate the whole process from data acquisition to the generation of the multitemporal mosaics and the computation of their confidence.