10 research outputs found

    Contrastive Learning for Regression on Hyperspectral Data

    Contrastive learning has demonstrated great effectiveness in representation learning, especially for image classification tasks. However, studies targeting regression tasks, and more specifically applications to hyperspectral data, remain scarce. In this paper, we propose a contrastive learning framework for regression tasks on hyperspectral data. To this end, we provide a collection of transformations relevant for augmenting hyperspectral data, and investigate contrastive learning for regression. Experiments on synthetic and real hyperspectral datasets show that the proposed framework and transformations significantly improve the performance of regression models, achieving better scores than other state-of-the-art transformations.
    Comment: Accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 202
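The recipe the abstract describes, augmenting each spectrum twice and pulling the two views together in embedding space, can be sketched in NumPy. The augmentations (illumination scaling, sensor noise, band dropout) and the InfoNCE loss below are illustrative choices, not the paper's exact transformation set or framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_spectrum(x, rng):
    # Hypothetical hyperspectral augmentations (illustrative, not the
    # paper's exact set): illumination scaling, sensor noise, band dropout.
    x = x * rng.uniform(0.9, 1.1)
    x = x + rng.normal(0.0, 0.01, size=x.shape)
    mask = rng.random(x.shape) > 0.1        # drop roughly 10% of bands
    return x * mask

def info_nce(z1, z2, tau=0.5):
    # Standard InfoNCE / NT-Xent loss; positive pairs sit on the diagonal.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau
    sim = sim - sim.max(axis=1, keepdims=True)           # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

spectra = rng.random((8, 200))                           # 8 pixels, 200 bands
view1 = np.stack([augment_spectrum(s, rng) for s in spectra])
view2 = np.stack([augment_spectrum(s, rng) for s in spectra])
loss = info_nce(view1, view2)   # identity "encoder" here, for brevity
```

In the full framework the two views would first pass through an encoder network; the loss itself is unchanged.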

    Support for UNRWA's survival

    The United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) provides life-saving humanitarian aid for 5·4 million Palestine refugees now entering their eighth decade of statelessness and conflict. About a third of Palestine refugees still live in 58 recognised camps. UNRWA operates 702 schools and 144 health centres, some of which are affected by the ongoing humanitarian disasters in Syria and the Gaza Strip. It has dramatically reduced the prevalence of infectious diseases, mortality, and illiteracy. Its social services include rebuilding infrastructure and homes that have been destroyed by conflict, and providing cash assistance and micro-finance loans for Palestinians whose rights are curtailed and who are denied the right of return to their homeland.

    End-to-End Convolutional Autoencoder for Nonlinear Hyperspectral Unmixing

    Hyperspectral unmixing is the process of decomposing a mixed pixel into its pure materials (endmembers) and estimating their corresponding proportions (abundances). Although linear unmixing models are more common due to their simplicity and flexibility, they suffer from many limitations in real-world scenes where interactions between pure materials exist, which paved the way for nonlinear methods. However, existing methods for nonlinear unmixing require prior knowledge of, or an assumption about, the type of nonlinearity, which can affect the results. This paper introduces a nonlinear method with a novel deep convolutional autoencoder for blind unmixing. The proposed framework consists of a deep encoder of successive small convolutional filters along with max-pooling layers, and a decoder composed of successive 2D and 1D convolutional filters. The output of the decoder is formed of a linear part and an additive nonlinear one. The network is trained using the mean squared error loss function. Several experiments were conducted to evaluate the performance of the proposed method using synthetic and real airborne data. Results show better performance in terms of abundance and endmember estimation compared to several existing methods.
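The decoder output described above, a linear mixing part plus an additive nonlinear one, mirrors bilinear mixing models. A minimal NumPy sketch of such a forward model and the MSE criterion, with made-up endmembers and abundances rather than the paper's learned network:

```python
import numpy as np

rng = np.random.default_rng(1)
bands, p = 50, 3                   # spectral bands, number of endmembers
E = rng.random((p, bands))         # made-up endmember signatures
a = rng.random(p)
a /= a.sum()                       # abundances on the probability simplex

linear = a @ E                     # linear mixing part
bilinear = np.zeros(bands)         # additive nonlinear (pairwise) part
for i in range(p):
    for j in range(i + 1, p):
        bilinear += a[i] * a[j] * E[i] * E[j]   # endmember interactions
pixel = linear + 0.5 * bilinear    # observed mixed pixel (toy nonlinearity)

recon = linear                     # what a purely linear model would predict
mse = np.mean((pixel - recon) ** 2)  # the MSE criterion the network trains on
```

The nonzero `mse` of the linear reconstruction is exactly the residual that motivates adding a nonlinear term to the decoder.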

    Unsupervised Domain Adaptation for Regression Using Dictionary Learning

    Unsupervised domain adaptation aims to generalize the knowledge learned on a labeled source domain to an unlabeled target domain. Most existing unsupervised approaches are feature-based methods that seek domain-invariant features. Despite their wide applicability, these approaches have proved to have limitations, especially in regression tasks. In this paper, we study the problem of unsupervised domain adaptation for regression tasks. We highlight the obstacles faced in regression compared to classification in terms of sensitivity to the scattering of data in feature space. We address this issue by proposing a new unsupervised domain adaptation approach based on dictionary learning. We learn a dictionary on the source data and follow an optimal trajectory to minimize the residue of the reconstruction of the target data with the same dictionary. For stable training of a neural network, we provide a robust implementation of a projected gradient descent dictionary learning framework, which yields a backpropagation-friendly end-to-end method. Experimental results show that the proposed method significantly outperforms most state-of-the-art methods on several well-known benchmark datasets, especially when transferring knowledge from synthetic to real domains.
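A toy NumPy sketch of projected gradient descent dictionary learning on source data, then measuring the reconstruction residue of target data with the same dictionary. The unit-norm projection and the step-size rule are common choices assumed here, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
Xs = rng.random((20, 60))                        # source samples (rows)
Xt = Xs + 0.05 * rng.standard_normal((20, 60))   # toy shifted "target" domain

k = 5
D = rng.random((k, 60))                          # dictionary: k atoms as rows

def project(D):
    # Projection step of projected gradient descent: keep atoms unit-norm.
    return D / np.linalg.norm(D, axis=1, keepdims=True)

def residual(X, D):
    # Best least-squares codes for X, then reconstruction error with D.
    A = X @ np.linalg.pinv(D)
    return np.linalg.norm(X - A @ D)

D = project(D)
for _ in range(100):
    A = Xs @ np.linalg.pinv(D)                   # closed-form codes, D fixed
    grad = A.T @ (A @ D - Xs)                    # grad of 0.5*||Xs - A D||^2
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-9)  # safe (1/Lipschitz) step
    D = project(D - step * grad)

res_source = residual(Xs, D)                     # training residue on source
res_target = residual(Xt, D)                     # quantity the method minimizes
```

Because every operation is differentiable (projection, matrix products), this loop is the kind of computation that can sit inside a network and be backpropagated through.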

    Domain Adaptation for Regression by Aligning Non-negative Decompositions

    Domain adaptation methods seek to generalize the knowledge learned on a labeled source domain to another, unlabeled target domain. Most deep learning methods for domain adaptation address the classification task, while regression models are still one step behind, with some positive results in shallow frameworks. Existing deep models for regression adaptation rely on aligning the eigenvectors of the source and target data. This process, although it provides satisfying results, is unstable and not backpropagation-friendly. In this paper, we present a novel deep adaptation model based on aligning the non-negative subspaces derived from the source and target domains. Removing the orthogonality constraints makes the model more stable to train. The proposed method is evaluated on a domain adaptation regression benchmark. Results show competitive performance compared to state-of-the-art models.
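A minimal sketch of the idea, assuming plain multiplicative-update NMF for the non-negative decompositions and a least-squares map for the alignment (the paper's exact factorization and alignment scheme may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
Xs = rng.random((30, 40))          # source data (toy)
Xt = rng.random((30, 40))          # target data (toy)
k = 4

def nmf(X, k, iters=200):
    # Plain multiplicative-update NMF: X is approximated by W @ H,
    # with W and H kept element-wise non-negative throughout.
    n, d = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, d)) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

_, Hs = nmf(Xs, k)                 # non-negative basis of the source domain
_, Ht = nmf(Xt, k)                 # non-negative basis of the target domain

# Align the source basis onto the target one with a least-squares map M,
# the non-negative counterpart of eigenvector (subspace) alignment.
M = np.linalg.lstsq(Hs.T, Ht.T, rcond=None)[0].T
gap = np.linalg.norm(M @ Hs - Ht)  # residual misalignment after the map
```

Unlike eigendecomposition, the multiplicative updates involve only matrix products and divisions, which is what makes this style of alignment friendlier to gradient-based training.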

    Apprentissage contrastif pour l'adaptation de domaine en régression

    Unsupervised domain adaptation takes up the challenge of using statistical learning models on data whose distribution differs from that of the training data. This requires learning effective representations that generalize across domains. In this paper, we study the use of contrastive learning to improve domain adaptation approaches. To this end, contrastive learning is applied in the latent space of a neural network, where the objective is to learn a representation that maximizes the similarity between similar examples and minimizes the similarity between dissimilar ones. In addition, to reduce the gap between the source and target domains, the method uses dictionary learning: dictionaries are extracted from both the source and target data, and the trajectory between the two dictionaries is minimized. The proposed method is evaluated on the dSprites dataset, showing better performance than state-of-the-art methods.