
    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Accurate estimation of the underlying materials therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described, along with mathematical problems and potential solutions, and algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
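The linear mixing model underlying this survey can be sketched in a few lines. The code below is an illustrative toy, not an algorithm from the paper: the clip-and-renormalize step is a crude stand-in for a proper fully constrained least-squares solver.

```python
import numpy as np

def unmix_lmm(pixel, E):
    """Estimate abundances a under the linear mixing model pixel ~ E @ a.
    E: (bands, endmembers) matrix of endmember signatures.
    Clip-and-renormalize crudely enforces the nonnegativity (ANC) and
    sum-to-one (ASC) abundance constraints."""
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)  # unconstrained least squares
    a = np.clip(a, 0.0, None)                      # enforce nonnegativity
    return a / a.sum()                             # enforce sum-to-one

# Noiseless toy: mix two random signatures 70/30 and recover the abundances.
rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(50, 2))) + 0.1
x = E @ np.array([0.7, 0.3])
a = unmix_lmm(x, E)
```

In the noiseless case the unconstrained solution already lies on the abundance simplex, so the projection is a no-op; with noise or model mismatch, a dedicated FCLS solver would be preferable.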

    Hyperspectral Image Analysis of Food Quality


    Parallel Nonnegative Matrix Factorization Algorithms for Hyperspectral Images

    Hyperspectral imaging is a branch of remote sensing that deals with creating and processing aerial or satellite images capturing a wide range of wavelengths, most of which are invisible to the naked eye. Hyperspectral images are composed of many bands, each corresponding to a certain range of light frequencies. Because of their complex nature, image-processing tasks such as feature extraction can be resource- and time-consuming. Many unsupervised extraction methods are available; a recently investigated one is Nonnegative Matrix Factorization (NMF), a method that, given a nonnegative linear mixture of nonnegative sources, attempts to recover those sources. In this thesis we designed, implemented, and tested parallel versions of two popular iterative NMF algorithms: one based on multiplicative updates and another on alternating gradient computation. Our algorithms are designed to leverage multi-processor SMP architectures and threading to distribute the workload evenly among the available CPUs and improve performance compared to their sequential counterparts. This work could serve as a basis for even more powerful distributed algorithms running on clustered architectures. The experiments show a speedup in both algorithms without a reduction in accuracy. In addition, we developed a Java-based framework offering reading and writing tools for various hyperspectral image formats, visualization tools, and a graphical user interface to launch and control the factorization processes.
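The multiplicative-update algorithm the thesis parallelizes can be sketched serially in a few lines (a minimal Lee-Seung sketch, assuming a hyperspectral cube flattened to a bands x pixels matrix; the thesis's threaded SMP implementation distributes these same dense matrix products across CPUs):

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with all factors >= 0.
    For a flattened (bands, pixels) hyperspectral matrix, columns of W are
    candidate endmember spectra and rows of H their per-pixel abundances."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W held fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H held fixed
    return W, H

# The dominant cost is the dense matrix products above, whose columns can be
# processed independently -- the parallelization opportunity exploited on SMP.
V = np.abs(np.random.default_rng(1).normal(size=(30, 100)))
W, H = nmf_multiplicative(V, r=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The updates never change the sign of an entry, so nonnegativity is preserved by construction, and the Frobenius-norm objective is nonincreasing at every step.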

    Tensor-based Hyperspectral Image Processing Methodology and its Applications in Impervious Surface and Land Cover Mapping

    The emergence of hyperspectral imaging provides a new perspective for Earth observation, in addition to previously available orthophoto and multispectral imagery. This thesis focused on both new data and new methodology in the field of hyperspectral imaging. First, the application of the future hyperspectral satellite EnMAP to impervious surface area (ISA) mapping was studied. In the search for an appropriate ISA mapping procedure for the new data, subpixel classification based on nonnegative matrix factorization (NMF) achieved the best success: the simulated EnMAP image shows great potential for urban ISA mapping, with over 85% accuracy. Unfortunately, the NMF, based on linear algebra, considers only the spectral information and neglects the spatial information in the original image. The recent wide interest in applying multilinear algebra in computer vision sheds light on this problem and raised the idea of nonnegative tensor factorization (NTF). This thesis found that the NTF has more advantages over the NMF when working with medium- rather than high-spatial-resolution hyperspectral images. Furthermore, this thesis proposed to equip the NTF-based subpixel classification methods with variations adopted from the NMF, which improved the urban ISA mapping results by ~2%. Lastly, the problem known as the curse of dimensionality is an obstacle in hyperspectral image applications. The majority of current dimension reduction (DR) methods are restricted to the spectral information, while the spatial information is neglected. To overcome this defect, two spectral-spatial methods, patch-based and tensor-patch-based, were thoroughly studied and compared in this thesis. To date, these two solutions have been popular mainly in computer vision studies, and their applications in hyperspectral DR remain limited.
The patch-based and tensor-patch-based variations greatly improved the quality of the dimension-reduced hyperspectral images, which in turn improved the land cover mapping results derived from them. In addition, this thesis proposed an improved method to produce an important intermediate result in the patch-based and tensor-patch-based DR process, which further improved the land cover mapping results.
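The patch-based spectral-spatial representation compared in the thesis can be sketched as follows (an illustrative toy: each pixel is replaced by its flattened w x w neighborhood before any DR step; the window size and reflect padding are assumptions, not the thesis's exact settings):

```python
import numpy as np

def extract_patches(cube, w=3):
    """Replace each pixel of a (rows, cols, bands) cube by its flattened
    w x w spatial neighborhood, so that a subsequent dimension-reduction
    step sees spatial context as well as the spectrum."""
    r, c, b = cube.shape
    pad = w // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.empty((r * c, w * w * b))
    k = 0
    for i in range(r):
        for j in range(c):
            out[k] = padded[i:i + w, j:j + w, :].ravel()
            k += 1
    return out

cube = np.random.default_rng(0).random((8, 8, 10))
X = extract_patches(cube)   # (64, 90): one row per pixel, ready for PCA-style DR
```

The tensor-patch variant would keep each w x w x bands patch as a third-order tensor instead of flattening it, which is what allows multilinear (tensor) DR methods to operate on it.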

    Hyperspectral Image Analysis through Unsupervised Deep Learning

    Hyperspectral image (HSI) analysis has become an active research area in the computer vision field, with a wide range of applications. However, in order to yield better recognition and analysis results, two challenging issues of HSI need to be addressed: the existence of mixed pixels and the significantly low spatial resolution (LR). In this dissertation, spectral unmixing (SU) and hyperspectral image super-resolution (HSI-SR) approaches are developed to address these two issues with advanced deep learning models in an unsupervised fashion. A specific application, anomaly detection, is also studied to show the importance of SU. Although deep learning has achieved state-of-the-art performance on supervised problems, its practice on unsupervised problems has not been fully developed. To address the problem of SU, an untied denoising autoencoder is proposed to decompose the HSI into endmembers and abundances under nonnegativity and abundance sum-to-one constraints. A denoising capacity is incorporated into the network with a sparsity constraint to boost the performance of endmember extraction and abundance estimation. Moreover, the first attempt is made to solve the problem of HSI-SR using an unsupervised encoder-decoder architecture by fusing the LR HSI with a high-resolution multispectral image (MSI). The architecture is composed of two encoder-decoder networks, coupled through a shared decoder, to preserve the rich spectral information of the HSI network. It encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI.
The angular difference between the representations is minimized to reduce spectral distortion. Finally, a novel anomaly detection algorithm is proposed through spectral unmixing and dictionary-based low-rank decomposition, where the dictionary is constructed with mean-shift clustering and its coefficients are encouraged to be low-rank. Experimental evaluations show a significant improvement in the performance of anomaly detection when conducted on the abundances (through SU). The effectiveness of the proposed approaches has been evaluated thoroughly by extensive experiments, achieving state-of-the-art results.
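The abundance constraints such an unmixing autoencoder enforces can be illustrated with a minimal forward-pass sketch (pure NumPy; the single linear encoder layer and the random weights are stand-ins for illustration, not the dissertation's trained model):

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax: outputs are nonnegative and each row sums to one."""
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_pix, bands, p = 200, 40, 3                 # pixels, spectral bands, endmembers
X = np.abs(rng.normal(size=(n_pix, bands)))  # toy pixel spectra
W_enc = 0.1 * rng.normal(size=(bands, p))    # encoder weights (learned in practice)
E = np.abs(rng.normal(size=(bands, p)))      # decoder weights = endmember spectra, kept >= 0

A = softmax(X @ W_enc)   # abundances: nonnegative and sum-to-one by construction
X_hat = A @ E.T          # linear decoder reproduces the mixing model E @ a per pixel
```

Using a softmax (or a sparse Dirichlet parameterization, as in the dissertation) on the hidden layer bakes the ANC and ASC physical constraints into the architecture instead of enforcing them with penalty terms.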

    Mineral identification using data-mining in hyperspectral infrared imagery

    The geological applications of hyperspectral infrared imagery mainly consist of mineral identification, mapping, and core logging, typically with airborne or portable instruments. Finding mineral indicators offers considerable benefits for mineralogy and mineral exploration, which usually involve portable instruments and core logging. Moreover, the development of faster and more mechanized systems increases the precision of identifying mineral indicators and avoids possible misclassification. The objective of this thesis was therefore to create a tool that uses hyperspectral infrared imagery and processes the data through image analysis and machine learning methods to identify small mineral grains used as mineral indicators. Such a system could be applied in different circumstances as an assistant for geological analysis and mineral exploration. The experiments were conducted in laboratory conditions in the long-wave infrared (7.7 ÎŒm to 11.8 ÎŒm, LWIR), with a LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The process began with a method to calculate the continuum removal: Non-negative Matrix Factorization (NMF) is applied to extract a rank-1 approximation and estimate the down-welling radiance, which is then compared with other conventional methods. The results indicate successful suppression of the continuum from the spectra, enabling the spectra to be compared with spectral libraries.
    Afterwards, to obtain an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine, and quartz grains. The results indicated that the unsupervised approach was more suitable due to its independence from a training stage. Once these results were obtained, two algorithms were tested to create False Color Composites (FCC) using a clustering approach. The results of this comparison indicate significant computational efficiency (more than 20 times faster) and promising performance for mineral identification. Finally, the reliability of automated LWIR hyperspectral mineral identification was tested, and the difficulty of identifying irregular grain surfaces along with mineral aggregates was verified. The results were compared to two different ground truths (GT), rigid-GT (manual labeling of regions) and observed-GT (manual labeling of pixels), for quantitative evaluation; observed-GT increased the accuracy by up to 1.5 times over rigid-GT. The samples were also examined by micro X-ray fluorescence (XRF) and scanning electron microscopy (SEM) in order to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The XRF imagery results were compared with automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and were used for GT validation. Overall, the four methods in this thesis (1. continuum removal; 2. classification or clustering for mineral identification; 3. two clustering algorithms for mineral spectra; 4. reliability verification) represent beneficial methodologies for identifying minerals. These methods have the advantage of being non-destructive, relatively accurate, and of low computational complexity, which could qualify them for use in laboratory conditions or in the field.
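The rank-1 NMF continuum-removal idea can be sketched as follows (an illustrative toy on synthetic spectra; the multiplicative updates and the division-based removal step are assumptions about implementation detail, not the thesis's exact procedure):

```python
import numpy as np

def rank1_continuum(S, n_iter=50, eps=1e-9):
    """Fit a rank-1 nonnegative factorization S ~ w @ h by multiplicative
    updates and divide it out as an estimated continuum.
    S: (pixels, bands) nonnegative spectra."""
    rng = np.random.default_rng(0)
    w = rng.random((S.shape[0], 1)) + eps
    h = rng.random((1, S.shape[1])) + eps
    for _ in range(n_iter):
        h *= (w.T @ S) / (w.T @ w @ h + eps)   # rank-1 multiplicative update for h
        w *= (S @ h.T) / (w @ h @ h.T + eps)   # rank-1 multiplicative update for w
    continuum = w @ h
    return S / (continuum + eps), continuum

# Exactly rank-1 synthetic "continuum": removal should leave a flat spectrum.
S = np.outer(np.linspace(1.0, 2.0, 50), np.linspace(0.5, 1.5, 30))
removed, cont = rank1_continuum(S)
```

On real LWIR data the measured spectra are continuum plus absorption features, so the ratio highlights the features rather than collapsing to a constant as in this synthetic check.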

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Hyperspectral imaging, also known as image spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the bulk of costs in manpower and material resources poses new challenges for reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken numerous artificial intelligence (AI)-related tasks. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the effects of various spectral variabilities in the HS imaging process and the complexity and redundancy of high-dimensional HS signals. Compared to convex models, non-convex modeling, which is capable of characterizing more complex real scenes and providing model interpretability both technically and theoretically, has proven to be a feasible way to reduce the gap between challenging HS vision tasks and currently advanced intelligent data processing models.

    Nonparametric Detection of Nonlinearly Mixed Pixels and Endmember Estimation in Hyperspectral Images

    Mixing phenomena in hyperspectral images depend on a variety of factors, such as the resolution of the observation devices, the properties of the materials, and how these materials interact with incident light in the scene. Different parametric and nonparametric models have been considered to address hyperspectral unmixing problems. The simplest one is the linear mixing model. Nevertheless, it has been recognized that the mixing phenomena can also be nonlinear, and the corresponding nonlinear analysis techniques are necessarily more challenging and complex than those employed for linear unmixing. Within this context, it makes sense to detect the nonlinearly mixed pixels in an image prior to its analysis, and then employ the simplest possible unmixing technique to analyze each pixel. In this paper, we propose a technique for detecting nonlinearly mixed pixels. The detection approach is based on the comparison of the reconstruction errors obtained with a Gaussian process regression model and with a linear regression model. The two errors are combined into a detection statistic for which a probability density function can be reasonably approximated. We also propose an iterative endmember extraction algorithm to be employed in combination with the detection algorithm. The proposed detect-then-unmix strategy, which consists of extracting endmembers, detecting nonlinearly mixed pixels, and unmixing, is tested with synthetic and real images.
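The compare-two-reconstruction-errors idea can be sketched with kernel ridge regression standing in for the paper's Gaussian process model (an illustrative toy; the per-band regression setup, the RBF kernel, its parameters, and the bilinear test mixture are all assumptions, not the authors' exact formulation):

```python
import numpy as np

def recon_errors(x, E, gamma=5.0, lam=1e-6):
    """Reconstruction errors for one pixel x (bands,) given endmembers E
    (bands, p): a linear mixing fit versus an RBF kernel ridge fit that
    regresses each band value on the endmember values at that band
    (a stand-in for the paper's Gaussian process posterior mean)."""
    a, *_ = np.linalg.lstsq(E, x, rcond=None)             # linear mixing fit
    e_lin = np.linalg.norm(x - E @ a)
    d2 = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)   # pairwise band distances
    K = np.exp(-gamma * d2)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), x)  # kernel ridge fit
    e_ker = np.linalg.norm(x - K @ alpha)
    return e_lin, e_ker

rng = np.random.default_rng(0)
E = rng.random((60, 2))                          # two endmembers, 60 bands
x_lin = E @ np.array([0.6, 0.4])                 # linearly mixed pixel
x_bil = x_lin + 0.5 * E[:, 0] * E[:, 1]          # bilinear interaction term added
e_lin_l, e_ker_l = recon_errors(x_lin, E)
e_lin_b, e_ker_b = recon_errors(x_bil, E)
# A large gap e_lin - e_ker flags the pixel as nonlinearly mixed.
```

The linear model reconstructs the linearly mixed pixel almost perfectly but fails on the bilinear one, while the kernel model fits both, so the error gap serves as the detection statistic.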

    Multivariate unmixing approaches on Raman images of plant cell walls: new insights or overinterpretation of results?

    Background: Plant cell walls are nanocomposites based on cellulose microfibrils embedded in a matrix of polysaccharides and aromatic polymers. They are optimized for different functions (e.g. mechanical stability) by changing cell form, cell wall thickness, and composition. Raman imaging has become an important tool for revealing the composition of plant tissues non-destructively on the microscale. Thousands of Raman spectra are acquired, each one being a spatially resolved molecular fingerprint of the plant cell wall. Nevertheless, due to the multicomponent nature of plant cell walls, many bands overlap, and classical band integration approaches are often not suitable for imaging. Multivariate data analysis approaches have high potential, as the whole wavenumber region of all the thousands of spectra is analysed at once. Results: Three multivariate unmixing algorithms, vertex component analysis (VCA), non-negative matrix factorization (NMF), and multivariate curve resolution-alternating least squares (MCR-ALS), were applied to find the purest components within datasets acquired from micro-sections of spruce wood and Arabidopsis. With all three approaches, different cell wall layers (including the tiny S1 and S3, 0.09-0.14 ÎŒm thick) and cell contents were distinguished, and endmember spectra with a good signal-to-noise ratio were extracted. The results of all methods are influenced by baseline correction as well as by the way each algorithm extracts components, i.e. prioritizing the extraction of positive endmembers by sequential orthogonal projections in VCA, or performing a simultaneous extraction of non-negative components aiming to explain the maximum variance in NMF and MCR-ALS. Other constraints applied (e.g. closure in VCA) and a prior principal component analysis filtering step in MCR-ALS also contribute to the differences obtained.
    Conclusions: VCA is recommended as a good preliminary approach, since it is fast, does not require setting many input parameters, and its endmember spectra are good approximations of the raw data. Yet these endmember spectra are more correlated and mixed than those retrieved by the NMF and MCR-ALS methods. The latter two give the best model statistics (lower lack of fit), but care has to be taken not to overestimate the rank, as this can lead to artificial band shapes due to peak splitting or inverted bands.
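The sequential-orthogonal-projection strategy attributed to VCA above can be illustrated with the closely related successive projection algorithm (a simplified sketch under a pure-pixel assumption, not the authors' exact VCA implementation):

```python
import numpy as np

def extract_endmembers(X, p):
    """Select p near-pure spectra by sequential orthogonal projections
    (successive projection algorithm, a simplified relative of VCA).
    X: (pixels, bands). Returns the indices of the selected pixels."""
    R = X.astype(float).copy()
    idx = []
    for _ in range(p):
        k = int(np.argmax(np.linalg.norm(R, axis=1)))  # most extreme remaining spectrum
        idx.append(k)
        u = R[k] / np.linalg.norm(R[k])
        R = R - np.outer(R @ u, u)                     # project out its direction
    return idx

# Toy scene: 200 mixed spectra plus the 3 pure spectra appended at the end.
rng = np.random.default_rng(0)
E_true = np.eye(3).repeat(10, axis=1) + 0.05      # (3, 30) distinct signatures
A = rng.dirichlet(np.ones(3), size=200)           # random abundance vectors
X = np.vstack([A @ E_true, E_true])
idx = extract_endmembers(X, 3)                    # recovers the pure spectra
```

Because the norm of a projected spectrum is convex over the data's convex hull, each argmax lands on an extreme (pure) spectrum, which is why such methods return existing spectra rather than variance-optimal components like NMF or MCR-ALS.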