
    Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package

    Spectral pixels are often a mixture of the pure spectra of the materials in a scene, called endmembers, due to the low spatial resolution of hyperspectral sensors, double scattering, and intimate mixtures of materials. Unmixing estimates the fractional abundances of the endmembers within each pixel. Depending on the prior knowledge of the endmembers, linear unmixing can be divided into three main groups: supervised, semi-supervised, and unsupervised (blind) linear unmixing. Advances in image processing and machine learning have substantially affected unmixing. This paper provides an overview of advanced and conventional unmixing approaches. Additionally, we draw a critical comparison between advanced and conventional techniques from the three categories. We compare the performance of the unmixing techniques on three simulated and two real datasets. The experimental results reveal the advantages of different unmixing categories for different unmixing scenarios. Moreover, we provide an open-source Python-based package, available at https://github.com/BehnoodRasti/HySUPP, to reproduce the results.
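
    As a concrete illustration of the supervised case described above (endmember spectra known, abundances estimated under nonnegativity and sum-to-one constraints), the following is a minimal NumPy/SciPy sketch of fully constrained least squares unmixing. It is not taken from HySUPP; the function name, the toy data, and the weighted sum-to-one trick are illustrative assumptions.

        import numpy as np
        from scipy.optimize import nnls

        def fcls_unmix(Y, E, delta=1e3):
            """Fully constrained least squares unmixing (illustrative sketch).

            Y : (bands, pixels) mixed spectra
            E : (bands, endmembers) known endmember signatures
            Returns A : (endmembers, pixels) abundances (>= 0, summing to ~1).
            The sum-to-one constraint is enforced approximately by appending a
            heavily weighted row of ones to the endmember matrix.
            """
            n_end = E.shape[1]
            E_aug = np.vstack([E, delta * np.ones((1, n_end))])
            A = np.zeros((n_end, Y.shape[1]))
            for j in range(Y.shape[1]):
                A[:, j], _ = nnls(E_aug, np.append(Y[:, j], delta))
            return A

        # Toy example: 3 endmembers, 50 bands, 100 pixels with known abundances.
        rng = np.random.default_rng(0)
        E = rng.random((50, 3))
        A_true = rng.dirichlet(np.ones(3), size=100).T            # columns sum to 1
        Y = E @ A_true + 0.001 * rng.standard_normal((50, 100))   # linear mixing + noise
        print("mean abundance error:", np.abs(fcls_unmix(Y, E) - A_true).mean())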

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurately estimating the underlying material composition requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
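
    The overview's starting point is the linear mixing model, in which each pixel spectrum is a convex combination of endmember signatures plus noise. A compact statement in standard notation (not quoted from the paper) is:

        \[
          \mathbf{y} \;=\; \mathbf{M}\boldsymbol{\alpha} + \mathbf{n}
          \;=\; \sum_{k=1}^{p} \alpha_k \mathbf{m}_k + \mathbf{n},
          \qquad \alpha_k \ge 0, \quad \sum_{k=1}^{p} \alpha_k = 1,
        \]
        where $\mathbf{y} \in \mathbb{R}^{B}$ is the measured pixel spectrum over $B$ bands,
        $\mathbf{M} = [\mathbf{m}_1, \dots, \mathbf{m}_p]$ collects the $p$ endmember signatures,
        $\boldsymbol{\alpha}$ holds their fractional abundances, and $\mathbf{n}$ models noise.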

    Dynamical Hyperspectral Unmixing with Variational Recurrent Neural Networks

    Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the analysis of hyperspectral image sequences. It reveals the dynamical evolution of the materials (endmembers) and of their proportions (abundances) in a given scene. However, adequately accounting for the spatial and temporal variability of the endmembers in MTHU is challenging and has not been fully addressed so far in unsupervised frameworks. In this work, we propose an unsupervised MTHU algorithm based on variational recurrent neural networks. First, a stochastic model is proposed to represent the dynamical evolution of the endmembers and their abundances, as well as the mixing process. Moreover, a new model based on a low-dimensional parametrization is used to represent spatial and temporal endmember variability, significantly reducing the number of variables to be estimated. We formulate MTHU as a Bayesian inference problem. However, this problem does not admit an analytical solution due to the nonlinearity and non-Gaussianity of the model. Thus, we propose a solution based on deep variational inference, in which the posterior distribution of the estimated abundances and endmembers is represented using a combination of recurrent neural networks and a physically motivated model. The parameters of the model are learned using stochastic backpropagation. Experimental results show that the proposed method outperforms state-of-the-art MTHU algorithms.
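
    The following is a heavily simplified sketch of the kind of variational recurrent model the abstract describes: a GRU produces a Gaussian posterior over abundance logits at each time step, abundances are obtained by softmax (reparameterized sampling enables stochastic backpropagation), and a fixed linear mixing decoder reconstructs the pixel. This is an assumption-laden toy, not the authors' architecture; it omits endmember variability, and all names and weightings are illustrative.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ToyVRNNUnmixer(nn.Module):
            """Toy variational-recurrent unmixer for one pixel's spectral time series.
            Simplifications (not the paper's model): fixed endmember matrix, GRU
            posterior over abundance logits, softmax for the abundance constraints."""

            def __init__(self, n_bands, n_end, hidden=32):
                super().__init__()
                self.E = nn.Parameter(torch.rand(n_bands, n_end), requires_grad=False)
                self.rnn = nn.GRU(n_bands, hidden, batch_first=True)
                self.mu = nn.Linear(hidden, n_end)
                self.logvar = nn.Linear(hidden, n_end)

            def forward(self, y_seq):                     # y_seq: (batch, time, bands)
                h, _ = self.rnn(y_seq)
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
                a = F.softmax(z, dim=-1)                  # abundances >= 0, sum to 1
                y_hat = a @ self.E.T                      # linear mixing "decoder"
                kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
                return a, F.mse_loss(y_hat, y_seq) + 1e-3 * kl   # ELBO-style loss

        # Toy usage: 8 pixels, 5 time steps, 50 bands, 3 endmembers.
        model = ToyVRNNUnmixer(n_bands=50, n_end=3)
        abundances, loss = model(torch.rand(8, 5, 50))
        loss.backward()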

    Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing

    Over the past decades, enormous efforts have been made to improve the performance of linear and nonlinear mixing models for hyperspectral unmixing (HU), yet their ability to simultaneously generalize across various spectral variabilities (SVs) and extract physically meaningful endmembers remains limited, owing to poor data fitting and reconstruction and sensitivity to SVs. Inspired by the powerful learning ability of deep learning (DL), we develop a general DL approach for HU that fully considers the properties of endmembers extracted from the hyperspectral imagery, called the endmember-guided unmixing network (EGU-Net). Beyond a standalone autoencoder-like architecture, EGU-Net is a two-stream Siamese deep network that learns an additional network from pure or nearly pure endmembers to correct the weights of the unmixing network by sharing network parameters and adding spectrally meaningful constraints (e.g., nonnegativity and sum-to-one), toward a more accurate and interpretable unmixing solution. Furthermore, the resulting general framework is not limited to pixelwise spectral unmixing but is also applicable to spatial information modeling with convolutional operators for spatial–spectral unmixing. Experimental results on three different datasets with ground-truth abundance maps for each material demonstrate the effectiveness and superiority of EGU-Net over state-of-the-art unmixing algorithms. The code is available at https://github.com/danfenghong/IEEE_TNNLS_EGU-Net
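
    A minimal sketch of the weight-sharing idea described above: one encoder is applied to both (nearly) pure pixels and mixed pixels, the pure-pixel stream supplies a supervisory signal, and a softmax output enforces the nonnegativity and sum-to-one constraints. This toy is not the EGU-Net architecture; the class names, loss weighting, and toy data are illustrative assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TwoStreamUnmixer(nn.Module):
            """Toy two-stream (Siamese) unmixing sketch: a single shared encoder is
            applied to both (nearly) pure and mixed pixels, so what is learned from
            the pure pixels constrains the unmixing stream. Softmax enforces the
            abundance constraints; the decoder columns play the role of endmembers."""

            def __init__(self, n_bands, n_end, hidden=64):
                super().__init__()
                self.encoder = nn.Sequential(               # weights shared by both streams
                    nn.Linear(n_bands, hidden), nn.ReLU(),
                    nn.Linear(hidden, n_end),
                )
                self.decoder = nn.Linear(n_end, n_bands, bias=False)

            def forward(self, mixed, pure, pure_labels):
                logits_mixed = self.encoder(mixed)
                logits_pure = self.encoder(pure)            # Siamese pass on pure pixels
                a_mixed = F.softmax(logits_mixed, dim=-1)   # abundances >= 0, sum to 1
                recon = self.decoder(a_mixed)               # linear remixing of endmembers
                loss = F.mse_loss(recon, mixed) + F.cross_entropy(logits_pure, pure_labels)
                return a_mixed, loss

        # Toy usage: 3 endmembers, 50 bands.
        model = TwoStreamUnmixer(n_bands=50, n_end=3)
        mixed, pure = torch.rand(16, 50), torch.rand(6, 50)
        abundances, loss = model(mixed, pure, torch.randint(0, 3, (6,)))
        loss.backward()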

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Hyperspectral imaging, also known as imaging spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the costs in manpower and material resources pose new challenges for reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken numerous artificial intelligence (AI)-related tasks. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the effects of various spectral variabilities in the HS imaging process and the complexity and redundancy of high-dimensional HS signals. Compared with convex models, non-convex modeling, which can characterize more complex real scenes and provide model interpretability both technically and theoretically, has proven to be a feasible way to narrow the gap between challenging HS vision tasks and currently advanced intelligent data processing models.

    MINVO Basis: Finding Simplexes with Minimum Volume Enclosing Polynomial Curves

    This paper studies the problem of finding the smallest $n$-simplex enclosing a given $n^{\text{th}}$-degree polynomial curve. Although the Bernstein and B-Spline polynomial bases provide feasible solutions to this problem, the simplexes obtained by these bases are not the smallest possible, which leads to undesirably conservative results in many applications. We first prove that the polynomial basis that solves this problem (the MINVO basis) also solves for the $n^{\text{th}}$-degree polynomial curve with the largest convex hull enclosed in a given $n$-simplex. Then, we present a formulation that is independent of the $n$-simplex or $n^{\text{th}}$-degree polynomial curve given. By using Sum-Of-Squares (SOS) programming, branch and bound, and moment relaxations, we obtain high-quality feasible solutions for any $n \in \mathbb{N}$ and prove numerical global optimality for $n = 1, 2, 3$. The results obtained for $n = 3$ show that, for any given $3^{\text{rd}}$-degree polynomial curve, the MINVO basis is able to obtain an enclosing simplex whose volume is $2.36$ and $254.9$ times smaller than the ones obtained by the Bernstein and B-Spline bases, respectively. When $n = 7$, these ratios increase to $902.7$ and $2.997 \cdot 10^{21}$, respectively. Comment: 25 pages, 16 figures
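
    The MINVO basis itself is obtained via SOS programming and is not reproduced here, but the Bernstein baseline it is compared against is easy to sketch: the n+1 Bernstein control points of an nth-degree curve in R^n enclose the curve in their convex hull, which is a simplex when the points are affinely independent, and its volume can then be measured. The conversion routine and toy cubic below are illustrative, not taken from the paper.

        import numpy as np
        from math import comb, factorial

        def monomial_to_bernstein(coeffs):
            """coeffs: (n+1, dim) monomial coefficients (t^0 ... t^n) of a curve on [0, 1].
            Returns the (n+1, dim) Bernstein control points; their convex hull encloses
            the curve and is a simplex when the points are affinely independent."""
            n = coeffs.shape[0] - 1
            M = np.array([[comb(j, k) / comb(n, k) if k <= j else 0.0
                           for k in range(n + 1)]
                          for j in range(n + 1)])
            return M @ coeffs

        def simplex_volume(vertices):
            """Volume of the simplex with n+1 vertices in R^n."""
            edges = vertices[1:] - vertices[0]
            return abs(np.linalg.det(edges)) / factorial(vertices.shape[1])

        # Toy cubic curve in R^3: p(t) = c0 + c1*t + c2*t^2 + c3*t^3.
        coeffs = np.random.default_rng(1).standard_normal((4, 3))
        print("Bernstein enclosing-simplex volume:",
              simplex_volume(monomial_to_bernstein(coeffs)))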

    Random access spectral imaging

    A salient goal of spectral imaging is to record a so-called hyperspectral data-cube, consisting of two spatial dimensions and one spectral dimension. Traditional approaches are based on time-sequential scanning in either the spatial or the spectral dimension: spatial scanning involves passing a fixed aperture over a scene in the manner of a raster scan, while spectral scanning is generally based on a tuneable filter, where a series of narrow-band images of a fixed field of view are recorded and assembled into the data-cube. Such techniques are suitable only when the scene in question is static or changes more slowly than the scan rate. For dynamic scenes, a time-resolved (snapshot) spectral imaging technique is required. Such techniques acquire the whole data-cube in a single measurement, but require a trade-off between spatial and spectral resolution that prevents current snapshot techniques from achieving resolutions on par with time-sequential techniques. Any snapshot device needs an optical architecture that gathers light from the scene and maps it onto the detector in a way that allows the spatial and spectral components to be de-multiplexed to reconstruct the data-cube. This requirement reduces the resolution of snapshot devices, since it becomes a problem of mapping a 3D data-cube onto a 2D detector. The sheer volume of data in the data-cube also presents a processing challenge, particularly for real-time processing.

    This thesis describes a prototype snapshot spectral imaging device that employs a random-spatial-access technique to record spectra only from the regions of interest in the scene, thus enabling maximisation of integration time and minimisation of data volume and recording rate. The aim of the prototype is to demonstrate how a particular optical architecture can remove the effect of some of the above-mentioned bottlenecks. Underpinning the basic concept is the fact that in all practical scenes most of the spectrally interesting information is contained in relatively few pixels. The prototype uses random spatial access to multiple points in the scene considered to be of greatest interest, enabling time-resolved, high-resolution spectrometry to be performed simultaneously at points across the full field of view. The enabling technology for the prototype was a digital micromirror device (DMD), an array of switchable mirrors used to create a two-channel system: one channel led to a conventional imaging camera, the other to a spectrometer. The DMD acted as a dynamic aperture to the spectrometer and could open and close slits in any part of the spectrometer aperture. The imaging channel was used to guide the selection of points of interest in the scene. An extensive geometric calibration was performed to determine the relationships between the DMD and the two channels of the system.

    Two demonstrations of the prototype are given in this thesis: a dynamic biological scene, and a static scene sampled using statistical sampling methods enabled by the dynamic aperture of the system. The dynamic scene consisted of red blood cells in motion while undergoing de-oxygenation, which results in a change in the spectrum. Ten red blood cells were tracked across the scene and the expected change in spectrum was observed. For the second demonstration, the prototype was modified for Raman spectroscopy by adding laser illumination; a mineral sample was scanned and used to test statistical sampling methods. These methods exploited the re-configurable aperture of the system to sample the scene using blind random sampling and a grid-based sampling approach. Other spectral imaging systems have a fixed aperture and cannot operate such sampling schemes.
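
    The two sampling schemes mentioned above (blind random sampling and grid-based sampling of the re-configurable DMD aperture) can be sketched as follows; the mirror-array dimensions and function names are hypothetical, chosen only to illustrate how sets of sample positions might be generated.

        import numpy as np

        def random_sample_points(shape, n_points, rng=None):
            """Blind random sampling: pick n_points distinct mirror positions on a
            DMD with the given (rows, cols) mirror-array shape."""
            if rng is None:
                rng = np.random.default_rng()
            flat = rng.choice(shape[0] * shape[1], size=n_points, replace=False)
            return np.column_stack(np.unravel_index(flat, shape))

        def grid_sample_points(shape, step):
            """Grid-based sampling: one slit position every `step` mirrors per axis."""
            rows = np.arange(0, shape[0], step)
            cols = np.arange(0, shape[1], step)
            rr, cc = np.meshgrid(rows, cols, indexing="ij")
            return np.column_stack([rr.ravel(), cc.ravel()])

        # Hypothetical 768 x 1024 mirror array, for illustration only.
        dmd_shape = (768, 1024)
        print(random_sample_points(dmd_shape, 10, np.random.default_rng(0)))
        print(grid_sample_points(dmd_shape, 256)[:5])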