3 research outputs found

    Multi-Temporal Mosaicking for Sentinel-3 / FLEX L2 Products

    Get PDF
    Master's thesis, Master's Degree in Intelligent Systems. Code: SIU043. Academic year: 2019-2020. The growth of data in the Remote Sensing field in recent years is opening the door to global study and analysis challenges that we were previously unable to tackle due to the lack of available information. Because of this, existing multi-temporal mosaicking techniques were limited to producing compositions of spectral images without considering high-level biophysical characteristics such as those obtained through missions like Sentinel-3 (S3) or the upcoming FLuorescence EXplorer (FLEX). This work aims to develop a multi-temporal mosaicking algorithm for S3-derived products and to study its future use for the FLEX mission. Specifically, the goal is to design a new operational methodology for automatically producing multi-temporal mosaics of derived products, thus facilitating the processing of high-level biophysical products for a given day on a weekly, monthly, seasonal, or annual basis; that is, automating the entire process from data acquisition to the generation of the multi-temporal mosaics and the computation of their confidence.
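
    The abstract describes a pipeline that composites per-acquisition L2 biophysical products over a time window and attaches a confidence layer to the result. The Python sketch below illustrates only the compositing step, not the thesis's actual operational method: it builds a weekly mosaic as the per-pixel median of quality-screened observations, and the function name and the confidence definition (fraction of valid observations per pixel) are illustrative assumptions.

        import numpy as np

        def mosaic_l2_products(stack, valid_mask):
            # stack:      (T, H, W) per-acquisition L2 product values (float)
            # valid_mask: (T, H, W) bool, True where the observation passed
            #             quality screening (e.g. cloud-free)
            masked = np.where(valid_mask, stack, np.nan)
            # Temporal composite: per-pixel median over valid observations.
            # Pixels with no valid observation at all come out as NaN.
            composite = np.nanmedian(masked, axis=0)
            # Confidence layer (assumed definition): fraction of acquisitions
            # in the window that contributed a valid observation.
            confidence = valid_mask.sum(axis=0) / stack.shape[0]
            return composite, confidence

        # Example: a weekly mosaic from 7 simulated daily acquisitions
        rng = np.random.default_rng(0)
        stack = rng.normal(2.0, 0.3, size=(7, 256, 256))   # LAI-like product
        valid = rng.random((7, 256, 256)) > 0.4            # simulated screening
        weekly, conf = mosaic_l2_products(stack, valid)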

    Graph Relation Network: Modeling Relations between Scenes for Multi-Label Remote Sensing Image Classification and Retrieval

    Get PDF
    Due to the proliferation of large-scale remote-sensing (RS) archives with multiple annotations, multilabel RS scene classification and retrieval are becoming increasingly popular. Although some recent deep learning-based methods are able to achieve promising results in this context, the lack of research on how to learn embedding spaces under the multilabel assumption often makes these models unable to preserve the complex semantic relations pervading aerial scenes, which is an important limitation in RS applications. To fill this gap, we propose a new graph relation network (GRN) for multilabel RS scene categorization. Our GRN is able to model the relations between samples (or scenes) by making use of a graph structure which is fed into network learning. For this purpose, we define a new loss function, called scalable neighbor discriminative loss with binary cross entropy (SNDL-BCE), that is able to embed the graph structures through the networks more effectively. The proposed approach can guide deep learning techniques (such as convolutional neural networks) to a more discriminative metric space, where semantically similar RS scenes are closely embedded and dissimilar images are separated from a novel multilabel viewpoint. To achieve this goal, our GRN jointly maximizes a weighted leave-one-out K-nearest neighbors (KNN) score in the training set, where the weight matrix describes the contributions of the nearest neighbors associated with each RS image on its class decision, and the likelihood of class discrimination in the multilabel scenario. An extensive experimental comparison, conducted on three multilabel RS scene data archives, validates the effectiveness of the proposed GRN in terms of KNN classification and image retrieval. The codes of this article will be made publicly available for reproducible research in the community.
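
    The abstract combines a weighted leave-one-out KNN objective on the embedding space with binary cross entropy for multilabel discrimination. The PyTorch sketch below is a simplified batch-level surrogate of that idea, not the paper's SNDL-BCE formulation: soft neighbor weights come from embedding similarity with the self-pair excluded (leave-one-out), each scene's label vector is reconstructed from its weighted neighbors, and the result is mixed with a standard multilabel BCE term. The function name, temperature, and mixing weight alpha are illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def sndl_bce_sketch(embeddings, logits, labels, temperature=0.1, alpha=0.5):
            # embeddings: (B, D) L2-normalized scene embeddings
            # logits:     (B, C) classifier outputs
            # labels:     (B, C) multilabel ground truth in {0, 1}, float
            sim = embeddings @ embeddings.t() / temperature   # (B, B) similarities
            sim.fill_diagonal_(float('-inf'))                 # leave-one-out: no self
            w = F.softmax(sim, dim=1)                         # soft KNN weight matrix
            # Reconstruct each scene's labels from its weighted neighbors
            neighbor_pred = (w @ labels).clamp(1e-6, 1 - 1e-6)
            sndl = F.binary_cross_entropy(neighbor_pred, labels)
            # Standard multilabel BCE on the classifier head
            bce = F.binary_cross_entropy_with_logits(logits, labels)
            return alpha * sndl + (1 - alpha) * bce

        # Usage with random tensors standing in for a CNN's outputs
        emb = F.normalize(torch.randn(16, 128), dim=1)
        logits, labels = torch.randn(16, 17), torch.randint(0, 2, (16, 17)).float()
        loss = sndl_bce_sketch(emb, logits, labels)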

    Remote Sensing Image Fusion Using Hierarchical Multimodal Probabilistic Latent Semantic Analysis

    No full text
    The generative semantic nature of probabilistic topic models has recently shown encouraging results within the remote sensing image fusion field when conducting land cover categorization. However, standard topic models have not yet been adapted to the inherent complexity of remotely sensed data, which may eventually limit their resulting performance. In this scenario, this paper presents a new topic-based image fusion framework, specially designed to fuse synthetic aperture radar (SAR) and multispectral imaging (MSI) data for unsupervised land cover categorization tasks. Specifically, we initially propose a hierarchical multimodal probabilistic latent semantic analysis (HMpLSA) model that takes advantage of two different vocabulary modalities, as well as two different levels of topics, in order to effectively uncover inter-sensor semantic patterns. Then, we define an SAR and MSI data fusion framework based on HMpLSA in order to perform unsupervised land cover categorization. Our experiments, conducted using three different SAR and MSI data sets, reveal that the proposed approach is able to provide competitive advantages with respect to standard clustering methods and topic models, as well as several multimodal topic model variants available in the literature.
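
    HMpLSA extends pLSA with two vocabulary modalities and two topic levels. As a smaller stepping stone, the numpy sketch below implements a flat multimodal pLSA (one shared document-topic distribution P(z|d), one topic-word matrix per modality, EM updates); it deliberately omits the hierarchical topic level that gives HMpLSA its name, and all names are illustrative. Unsupervised land cover categories can then be read off as the argmax of P(z|d) or by clustering it.

        import numpy as np

        def multimodal_plsa(counts_sar, counts_msi, n_topics, n_iter=50, seed=0):
            # counts_sar: (D, V_sar) bag-of-words counts from a SAR codebook
            # counts_msi: (D, V_msi) bag-of-words counts from an MSI codebook
            rng = np.random.default_rng(seed)
            counts = (counts_sar.astype(float), counts_msi.astype(float))
            D = counts[0].shape[0]
            p_z_d = rng.random((D, n_topics))
            p_z_d /= p_z_d.sum(1, keepdims=True)              # shared P(z|d)
            p_w_z = [rng.random((n_topics, c.shape[1])) for c in counts]
            for m in range(2):
                p_w_z[m] /= p_w_z[m].sum(1, keepdims=True)    # per-modality P(w|z)
            for _ in range(n_iter):
                new_pzd = np.zeros_like(p_z_d)
                for m in range(2):
                    # E-step: responsibilities P(z | d, w) for modality m
                    joint = p_z_d[:, :, None] * p_w_z[m][None, :, :]  # (D, K, V)
                    resp = joint / (joint.sum(1, keepdims=True) + 1e-12)
                    expected = counts[m][:, None, :] * resp   # expected counts
                    # M-step: re-estimate topic-word, accumulate doc-topic
                    p_w_z[m] = expected.sum(0)
                    p_w_z[m] /= p_w_z[m].sum(1, keepdims=True) + 1e-12
                    new_pzd += expected.sum(2)
                p_z_d = new_pzd / (new_pzd.sum(1, keepdims=True) + 1e-12)
            return p_z_d, p_w_z

        # Unsupervised categories: hardest assignment per document/superpixel
        # labels = p_z_d.argmax(axis=1)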
