64 research outputs found

    Inertia-Constrained Pixel-by-Pixel Nonnegative Matrix Factorisation: a Hyperspectral Unmixing Method Dealing with Intra-class Variability

    Blind source separation is a common tool for analysing the composition of pixels in hyperspectral images. Such methods usually assume that the pure pixel spectra (endmembers) are the same across the whole image for each class of materials. In remote sensing, this assumption no longer holds in the presence of intra-class variability due to illumination conditions, weathering, slight variations of the pure materials, etc. In this paper, we first describe the results of investigations highlighting the intra-class variability measured in real images. Building on these results, a new formulation of the linear mixing model is presented, leading to two new methods. Unconstrained Pixel-by-pixel NMF (UP-NMF) is a new blind source separation method based on a linear mixing model that can deal with intra-class variability. To overcome the limitations of UP-NMF, an extended method is proposed, named Inertia-constrained Pixel-by-pixel NMF (IP-NMF). For each sensed spectrum, these extended versions of NMF extract a corresponding set of source spectra. In IP-NMF, a constraint limits the spread of each source's estimates. The methods are tested on a semi-synthetic data set built from spectra extracted from a real hyperspectral image and then numerically mixed, demonstrating the benefit of our methods under realistic source variability. Finally, IP-NMF is tested on a real data set and is shown to outperform state-of-the-art methods.
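
    The pixelwise mixing model behind UP-NMF and IP-NMF, and the inertia term that distinguishes IP-NMF, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the array shapes, the perturbation level simulating intra-class variability, and the penalty weight `mu` are assumptions made here.

```python
# Minimal sketch of a pixel-by-pixel linear mixing model: each pixel p has its
# own endmember matrix S_p, so x_p ≈ S_p^T a_p instead of one shared S.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 100, 50, 3

# Per-pixel endmember spectra, simulated as small perturbations of reference
# spectra to mimic intra-class variability (shapes and levels are arbitrary).
reference = rng.uniform(0.1, 0.9, size=(n_classes, n_bands))
S = reference[None, :, :] * (1.0 + 0.05 * rng.standard_normal((n_pixels, n_classes, n_bands)))

# Abundances: nonnegative and summing to one in each pixel.
A = rng.dirichlet(np.ones(n_classes), size=n_pixels)

# Observed pixels under the pixelwise model, plus a little noise.
X = np.einsum('pk,pkb->pb', A, S) + 0.001 * rng.standard_normal((n_pixels, n_bands))

# IP-NMF-style objective: data fit plus an "inertia" penalty that limits the
# spread of each class's per-pixel endmember estimates around the class mean.
mu = 0.1  # hypothetical penalty weight
class_means = S.mean(axis=0)
inertia = ((S - class_means[None]) ** 2).sum()
objective = 0.5 * ((X - np.einsum('pk,pkb->pb', A, S)) ** 2).sum() + mu * inertia
print(objective)
```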

    Blind hyperspectral unmixing using an Extended Linear Mixing Model to address spectral variability

    Spectral unmixing is one of the main research topics in hyperspectral imaging. It can be formulated as a source separation problem whose goal is to recover the spectral signatures of the materials present in the observed scene (called endmembers) as well as their relative proportions (called fractional abundances), and this for every pixel in the image. A linear mixing model is often used for its simplicity and ease of use, but it implicitly assumes that a single spectrum can be completely representative of a material. However, in many scenarios this assumption does not hold, since many factors, such as illumination conditions and the intrinsic variability of the endmembers, modify the spectral signatures of the materials. In this paper, we propose an algorithm to unmix hyperspectral data using the recently proposed Extended Linear Mixing Model. The proposed approach allows a pixelwise, spatially coherent local variation of the endmembers, leading to scaled versions of reference endmembers. We also show that classic nonnegative least squares, as well as other approaches to tackling spectral variability, can be interpreted in the framework of this model. The results of the proposed algorithm on two synthetic datasets, including one simulating the effect of topography on the measured reflectance through physical modelling, and on two real datasets show that the proposed technique outperforms other methods aimed at addressing spectral variability and can provide an accurate estimation of endmember variability across the scene thanks to the estimated scaling factors.
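
    The generative form of the Extended Linear Mixing Model can be illustrated with a short sketch. It is only illustrative: there is no noise or spatial regularisation, the dimensions are arbitrary, and the least-squares update of the scaling factors shown at the end is an assumption made here rather than the paper's algorithm.

```python
# Minimal sketch of the Extended Linear Mixing Model (ELMM):
#   x_p ≈ S @ diag(psi_p) @ a_p,
# i.e. each pixel sees per-endmember scaled versions of reference endmembers.
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_endmembers, n_pixels = 60, 4, 200

S = rng.uniform(0.05, 0.95, size=(n_bands, n_endmembers))   # reference endmembers
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T   # abundances (k, p)
Psi = rng.uniform(0.8, 1.2, size=(n_endmembers, n_pixels))  # scaling factors (k, p)

# Generate the image pixel by pixel under the ELMM.
X = np.stack([S @ np.diag(Psi[:, p]) @ A[:, p] for p in range(n_pixels)], axis=1)

# With abundances fixed, each pixel's scaling factors solve a small
# least-squares problem (spatial coherence terms are omitted here).
for p in range(3):  # a few pixels for illustration
    M = S * A[:, p][None, :]                      # columns a_{k,p} * s_k
    psi_hat, *_ = np.linalg.lstsq(M, X[:, p], rcond=None)
    print(np.round(psi_hat, 3), np.round(Psi[:, p], 3))
```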

    Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package

    Spectral pixels are often a mixture of the pure spectra of the materials, called endmembers, due to the low spatial resolution of hyperspectral sensors, double scattering, and intimate mixtures of materials in the scenes. Unmixing estimates the fractional abundances of the endmembers within each pixel. Depending on the prior knowledge of the endmembers, linear unmixing can be divided into three main groups: supervised, semi-supervised, and unsupervised (blind) linear unmixing. Advances in image processing and machine learning have substantially affected unmixing. This paper provides an overview of advanced and conventional unmixing approaches. Additionally, we draw a critical comparison between advanced and conventional techniques from the three categories. We compare the performance of the unmixing techniques on three simulated and two real datasets. The experimental results reveal the advantages of the different unmixing categories for different unmixing scenarios. Moreover, we provide an open-source Python-based package, available at https://github.com/BehnoodRasti/HySUPP, to reproduce the results.
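
    To make the supervised (known-endmember) case concrete, the following package-independent sketch estimates abundances per pixel with nonnegative least squares and an approximate sum-to-one constraint. It is not the HySUPP API; the variable names, the noise level, and the augmentation weight are assumptions made here.

```python
# Minimal sketch of supervised linear unmixing: per-pixel nonnegative least
# squares with sum-to-one enforced approximately via a heavily weighted row of
# ones (a common FCLS-style trick).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_bands, n_endmembers, n_pixels = 100, 5, 500

E = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))       # known endmembers
A_true = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)  # true abundances
X = E @ A_true.T + 0.001 * rng.standard_normal((n_bands, n_pixels))

delta = 100.0  # weight of the sum-to-one row (arbitrary)
E_aug = np.vstack([E, delta * np.ones((1, n_endmembers))])
A_hat = np.array([nnls(E_aug, np.append(X[:, p], delta))[0] for p in range(n_pixels)])

print("mean abundance RMSE:", np.sqrt(((A_hat - A_true) ** 2).mean()))
```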

    Hyperspectral unmixing with material variability using social sparsity

    We apply social norms for the first time to the problem of hyperspectral unmixing while modeling spectral variability. These norms combine intra-group penalties in a global inter-group penalization that can enforce the selection of entire endmember bundles; this results in the selection of a few representative materials even in the presence of large endmember bundles capturing each material's variability. We demonstrate improvements quantitatively on synthetic data and qualitatively on real data for three social norms: the group norm, the elitist norm, and a fractional social norm. We find that the greatest improvements arise from using either the group or the fractional variant.
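
    A sketch of how such bundle-structured ("social") penalties can be evaluated on a pixel's abundance vector is given below. The exact definitions, especially of the fractional variant, are assumptions for illustration and may differ from the paper's formulation.

```python
# Minimal sketch of social-sparsity penalties over endmember bundles (groups).
import numpy as np

def group_penalty(a, groups):
    """Within-bundle l2 norms summed across bundles (group-lasso style):
    favours keeping or discarding whole bundles."""
    return sum(np.linalg.norm(a[idx]) for idx in groups)

def elitist_penalty(a, groups):
    """Within-bundle l1 norms aggregated in an l2 norm across bundles:
    favours a few representatives inside each bundle."""
    return np.sqrt(sum(np.abs(a[idx]).sum() ** 2 for idx in groups))

def fractional_penalty(a, groups, p=0.5):
    """One possible fractional variant: within-bundle l_p quasi-norms summed
    across bundles (p < 1 promotes stronger sparsity)."""
    return sum((np.abs(a[idx]) ** p).sum() ** (1.0 / p) for idx in groups)

# Example: 3 bundles of 4 candidate endmembers each.
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
a = np.array([0.3, 0.2, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
print(group_penalty(a, groups), elitist_penalty(a, groups), fractional_penalty(a, groups))
```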

    Unmixing multitemporal hyperspectral images accounting for smooth and abrupt variations

    A classical problem in hyperspectral imaging, referred to as hyperspectral unmixing, consists of estimating the spectra associated with each material present in an image and their proportions in each pixel. In practice, illumination variations (e.g., due to declivity or complex interactions with the observed materials) and the possible presence of outliers can result in significant changes in both the shape and the amplitude of the measurements, thus modifying the extracted signatures. In this context, sequences of hyperspectral images acquired over the same area at different time instants are expected to be simultaneously affected by such phenomena. We therefore propose a hierarchical Bayesian model to simultaneously account for smooth and abrupt spectral variations affecting a set of multitemporal hyperspectral images to be jointly unmixed. This model assumes that smooth variations can be interpreted as the result of endmember variability, whereas abrupt variations are due to significant changes in the imaged scene (e.g., the presence of outliers, additional endmembers, etc.). The parameters of this Bayesian model are estimated from samples generated by a Gibbs sampler targeting its posterior distribution. Performance assessment is conducted on synthetic data in comparison with state-of-the-art unmixing methods.
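
    The smooth-versus-abrupt decomposition that the model rests on can be sketched with a toy data generator. The notation (a shared endmember matrix M, smooth perturbations dM_t, a sparse outlier term R_t), the perturbation scales, and the outlier rate are assumptions for illustration; the hierarchical priors and the Gibbs sampler itself are not reproduced here.

```python
# Minimal sketch of a multitemporal mixing model with smooth endmember
# variability plus sparse abrupt changes: Y_t ≈ (M + dM_t) @ A_t + R_t.
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_endmembers, n_pixels, n_times = 50, 3, 300, 4

M = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))  # shared endmembers
images = []
for t in range(n_times):
    dM = 0.02 * rng.standard_normal(M.shape)              # smooth variability
    A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T
    R = np.zeros((n_bands, n_pixels))
    outliers = rng.random(n_pixels) < 0.01                # abrupt changes in ~1% of pixels
    R[:, outliers] = rng.uniform(0.0, 0.5, size=(n_bands, outliers.sum()))
    Y = (M + dM) @ A + R + 0.001 * rng.standard_normal((n_bands, n_pixels))
    images.append(Y)

print(len(images), images[0].shape)
```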

    MDAS: a new multimodal benchmark dataset for remote sensing

    In Earth observation, multimodal data fusion is an intuitive strategy for overcoming the limitations of individual data sources. Complementary physical content across data sources allows comprehensive and precise information retrieval. With current satellite missions, such as the ESA Copernicus programme, various data are accessible at an affordable cost, and future applications will have many options for data sources. Such a privilege can be beneficial only if algorithms are ready to work with various data sources. However, current data fusion studies mostly focus on the fusion of two data sources, for two reasons. First, different combinations of data sources face different scientific challenges: for example, the fusion of synthetic aperture radar (SAR) data and optical images needs to handle geometric differences, while the fusion of hyperspectral and multispectral images deals with different spatial and spectral resolutions. Second, it remains expensive, both financially and in terms of labour, to acquire multiple data sources for the same region at the same time. In this paper, we provide the community with a multimodal benchmark data set, MDAS, for the city of Augsburg, Germany. MDAS includes synthetic aperture radar data, a multispectral image, a hyperspectral image, a digital surface model (DSM), and geographic information system (GIS) data, all collected on the same date, 7 May 2018. MDAS is a new benchmark data set that offers researchers a rich choice of data sources. In this paper, we run experiments for three typical remote sensing applications, namely resolution enhancement, spectral unmixing, and land cover classification, on the MDAS data set. Our experiments demonstrate the performance of representative state-of-the-art algorithms, whose outcomes can serve as baselines for further studies. The dataset is publicly available at https://doi.org/10.14459/2022mp1657312 (Hu et al., 2022a) and the code (including the pre-trained models) at https://doi.org/10.5281/zenodo.7428215 (Hu et al., 2022b).

    Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability

    Image fusion combines data from different heterogeneous sources to obtain more precise information about an underlying scene. Hyperspectral-multispectral (HS-MS) image fusion is currently attracting great interest in remote sensing since it allows the generation of high-spatial-resolution HS images, circumventing the main limitation of this imaging modality. Existing HS-MS fusion algorithms, however, neglect the spectral variability often present between images acquired at different time instants. This time difference causes variations in the spectral signatures of the underlying constituent materials due to different acquisition and seasonal conditions. This paper introduces a novel HS-MS image fusion strategy that combines an unmixing-based formulation with an explicit parametric model for the typical spectral variability between the two images. Simulations with synthetic and real data show that the proposed strategy leads to a significant performance improvement under spectral variability and to state-of-the-art performance otherwise.
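
    The observation model behind unmixing-based HS-MS fusion with spectral variability can be sketched as follows. The operator choices are simplifications made here: block averaging stands in for the spatial blurring/downsampling of the HS image, a random row-stochastic matrix stands in for the multispectral sensor's spectral response, and a per-band scaling stands in for the paper's more general parametric variability model.

```python
# Minimal sketch: HS and MS images as two degraded views of the same
# high-resolution abundance map, with spectral variability on the MS side.
import numpy as np

rng = np.random.default_rng(4)
n_bands_hs, n_bands_ms, n_endmembers = 120, 8, 4
h, w, d = 30, 30, 3            # low-resolution HS grid and downsampling factor
H, W = h * d, w * d            # high-resolution MS grid

M_hs = rng.uniform(0.0, 1.0, size=(n_bands_hs, n_endmembers))  # HS endmembers
psi = rng.uniform(0.8, 1.2, size=(n_bands_hs, 1))              # seasonal variability (per-band scaling)
SRF = rng.dirichlet(np.ones(n_bands_hs), size=n_bands_ms)      # spectral response (ms x hs), rows sum to 1

A = rng.dirichlet(np.ones(n_endmembers), size=H * W).T         # high-resolution abundances (k, H*W)

# MS image: full spatial resolution, spectrally degraded, variability-affected endmembers.
Y_ms = SRF @ (psi * M_hs) @ A

# HS image: full spectral resolution, spatially degraded by d x d block averaging.
Z = (M_hs @ A).reshape(n_bands_hs, h, d, w, d).mean(axis=(2, 4))
Y_hs = Z.reshape(n_bands_hs, h * w)

print(Y_ms.shape, Y_hs.shape)  # (8, 8100) and (120, 900)
```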