50 research outputs found

    Dictionary-based Tensor Canonical Polyadic Decomposition

    To ensure the interpretability of the sources extracted by tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored in terms of both parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
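    The following is a minimal sketch of the core idea, not the authors' algorithm: an ALS-style canonical polyadic decomposition in which one factor is constrained to the dictionary by snapping each least-squares column to its best-matching atom. The tensor X, dictionary D, and rank are illustrative assumptions.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding of a 3-way tensor (C-order convention)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product; first argument varies slowest."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def dictionary_cpd(X, D, rank, n_iter=50):
    """CPD of a 3-way tensor whose mode-0 factor columns must be atoms of D."""
    _, J, K = X.shape
    rng = np.random.default_rng(0)
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    Dn = D / (np.linalg.norm(D, axis=0) + 1e-12)   # unit-norm atoms
    for _ in range(n_iter):
        # Unconstrained least-squares estimate of the mode-0 factor ...
        A_ls = np.linalg.lstsq(khatri_rao(B, C), unfold(X, 0).T, rcond=None)[0].T
        # ... then snap each column to its most correlated dictionary atom.
        idx = np.argmax(np.abs(Dn.T @ A_ls), axis=0)
        A = D[:, idx]
        # Standard ALS updates for the two unconstrained factors.
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(X, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(X, 2).T, rcond=None)[0].T
    return A, B, C, idx
```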

    Nonparametric Detection of Nonlinearly Mixed Pixels and Endmember Estimation in Hyperspectral Images

    Mixing phenomena in hyperspectral images depend on a variety of factors, such as the resolution of observation devices, the properties of materials, and how these materials interact with incident light in the scene. Different parametric and nonparametric models have been considered to address hyperspectral unmixing problems. The simplest one is the linear mixing model. Nevertheless, it has been recognized that the mixing phenomena can also be nonlinear. The corresponding nonlinear analysis techniques are necessarily more challenging and complex than those employed for linear unmixing. Within this context, it makes sense to detect the nonlinearly mixed pixels in an image prior to its analysis, and then employ the simplest possible unmixing technique to analyze each pixel. In this paper, we propose a technique for detecting nonlinearly mixed pixels. The detection approach is based on the comparison of the reconstruction errors using both a Gaussian process regression model and a linear regression model. The two errors are combined into a detection statistic for which a probability density function can be reasonably approximated. We also propose an iterative endmember extraction algorithm to be employed in combination with the detection algorithm. The proposed detect-then-unmix strategy, which consists of extracting endmembers, detecting nonlinearly mixed pixels, and unmixing, is tested with synthetic and real images.
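    A hedged sketch of the kind of test the abstract describes, under the assumption that the endmember matrix M (bands x endmembers) is known: reconstruct a pixel spectrum y with a linear least-squares model and with a Gaussian process regression, and combine the two reconstruction errors. The difference of errors used here is one plausible combination, not necessarily the one in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def detection_statistic(y, M):
    """Linear vs. nonparametric reconstruction error for one pixel spectrum."""
    # Linear reconstruction: unconstrained least-squares abundances.
    a, *_ = np.linalg.lstsq(M, y, rcond=None)
    err_lin = np.sum((y - M @ a) ** 2)
    # Nonparametric reconstruction: GP regression of the reflectance at each
    # band on the endmember values at that band.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gp.fit(M, y)
    err_gp = np.sum((y - gp.predict(M)) ** 2)
    # Large values suggest the linear model fits poorly relative to the GP,
    # i.e. a nonlinearly mixed pixel; the threshold would come from the
    # approximated null distribution mentioned in the abstract.
    return err_lin - err_gp
```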

    A Stepwise Analytical Projected Gradient Descent Search for Hyperspectral Unmixing and Its Code Vectorization

    We present, in this paper, a new methodology for spectral unmixing, where a vector of fractions, corresponding to a set of endmembers (EMs), is estimated for each pixel in the image. The process first provides an initial estimate of the fraction vector, followed by an iterative procedure that converges to an optimal solution. Specifically, projected gradient descent (PGD) optimization is applied to (a variant of) the spectral angle mapper objective function, so as to significantly reduce the estimation error due to amplitude (i.e., magnitude) variations in EM spectra caused by the illumination change effect. To improve the computational efficiency of our method over a commonly used gradient descent technique, we have analytically derived the objective function's gradient and the optimal step size used in each iteration. To gain further improvement, we have implemented our unmixing module via code vectorization, where the entire process is "folded" into a single loop and the fractions for all of the pixels are solved simultaneously. We call this new parallel scheme vectorized code PGD unmixing (VPGDU). VPGDU has the advantage of solving (simultaneously) an independent optimization problem per image pixel, exactly as other pixelwise algorithms, but significantly faster. Its performance was compared with the commonly used fully constrained least squares unmixing (FCLSU), the generalized bilinear model (GBM) method for hyperspectral unmixing, and the fast state-of-the-art methods sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) and collaborative SUnSAL (CLSUnSAL), based on the alternating direction method of multipliers. Considering all of the prospective EMs of a scene at each pixel (i.e., without a priori knowledge of which or how many EMs are actually present in a given pixel), we demonstrate that the accuracy of VPGDU is considerably higher than that obtained by FCLSU, GBM, SUnSAL, and CLSUnSAL under varying illumination, and is otherwise comparable with respect to these methods. However, while our method is significantly faster than FCLSU and GBM, it is slower than SUnSAL and CLSUnSAL by roughly an order of magnitude. This work was supported by the Israel Science Ministry Scientific Infrastructure Research Grant Scheme, the Helen Norman Asher Space Research Grant Scheme, a Technion PhD Scholarship, the New England Fund (Technion), and the Environmental Mapping and Monitoring of Iceland by Remote Sensing (EMMIRS) project.
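    A minimal, hedged sketch of the vectorized-PGD idea: minimize a plain least-squares objective (not the SAM variant used in the paper) over all pixels at once, with each abundance column projected onto the probability simplex and a fixed step size derived analytically from the endmember matrix. Y (bands x pixels) and M (bands x endmembers) are assumed inputs.

```python
import numpy as np

def project_simplex_cols(A):
    """Euclidean projection of each column of A onto the unit simplex."""
    r, n = A.shape
    U = -np.sort(-A, axis=0)                     # each column sorted descending
    css = np.cumsum(U, axis=0) - 1.0
    k = np.arange(1, r + 1)[:, None]
    cond = U - css / k > 0
    rho = r - 1 - np.argmax(cond[::-1], axis=0)  # last index where cond holds
    theta = css[rho, np.arange(n)] / (rho + 1)
    return np.maximum(A - theta, 0.0)

def pgd_unmix(Y, M, n_iter=200):
    """Projected gradient descent on ||Y - M A||_F^2, all pixels at once."""
    MtM, MtY = M.T @ M, M.T @ Y
    step = 1.0 / np.linalg.eigvalsh(MtM).max()   # analytic fixed step 1/L
    A = np.full((M.shape[1], Y.shape[1]), 1.0 / M.shape[1])
    for _ in range(n_iter):
        A = project_simplex_cols(A - step * (MtM @ A - MtY))
    return A
```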

    Development of a spectral unmixing procedure using a genetic algorithm and spectral shape

    Spectral unmixing produces spatial abundance maps of endmembers, or 'pure' materials, using sub-pixel scale decomposition. It is particularly well suited to extracting a greater portion of the rich information content in hyperspectral data in support of real-world issues such as mineral exploration, resource management, agriculture and food security, pollution detection, and climate change. However, illumination or shading effects, signature variability, and noise are problematic. Least-squares (LS) based spectral unmixing techniques, such as Non-Negative Sum Less or Equal to One (NNSLO), depend on "shade" endmembers to deal with the amplitude errors. Furthermore, the LS-based method does not consider amplitude errors in abundance constraint calculations and thus often leads to abundance errors. The Spectral Angle Constraint (SAC) reduces the amplitude errors, but the abundance errors remain because of the fully constrained condition. In this study, a Genetic Algorithm (GA) was adapted to resolve these issues using a series of iterative computations based on the Darwinian strategy of 'survival of the fittest' to improve the accuracy of abundance estimates. The developed GA uses a Spectral Angle Mapper (SAM) based fitness function to calculate abundances by satisfying a SAC-based weakly constrained condition. This was validated using two hyperspectral data sets: (i) a simulated hyperspectral dataset with embedded noise and illumination effects and (ii) AVIRIS data acquired over Cuprite, Nevada, USA. Results showed that the new GA-based unmixing method improved abundance estimation accuracy and was less sensitive to illumination effects and noise than existing spectral unmixing methods such as the SAC and NNSLO. For the synthetic data, the GA increased the average index of agreement between true and estimated abundances by 19.83% and 30.10% compared to the SAC and the NNSLO, respectively. Furthermore, for the real data, the GA improved the overall accuracy by 43.1% and 9.4% compared to the SAC and NNSLO, respectively.
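    A toy sketch of the genetic-algorithm step described above, assuming a known endmember matrix M and a pixel spectrum y: a population of abundance vectors evolves under a spectral-angle (SAM) fitness with selection, crossover, and mutation. The real method additionally enforces a SAC-based weak constraint, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sam(y, y_hat):
    """Spectral angle between observed and reconstructed spectra."""
    c = y @ y_hat / (np.linalg.norm(y) * np.linalg.norm(y_hat) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def ga_unmix(y, M, pop=60, gens=200, mut=0.05):
    r = M.shape[1]
    P = rng.dirichlet(np.ones(r), size=pop)      # feasible initial population
    for _ in range(gens):
        fit = np.array([sam(y, M @ p) for p in P])
        P = P[np.argsort(fit)][: pop // 2]       # survival of the fittest
        parents = rng.integers(0, pop // 2, size=(pop // 2, 2))
        children = 0.5 * (P[parents[:, 0]] + P[parents[:, 1]])   # crossover
        children += mut * rng.standard_normal(children.shape)    # mutation
        children = np.clip(children, 0.0, None)
        children /= children.sum(axis=1, keepdims=True) + 1e-12  # renormalize
        P = np.vstack([P, children])
    return P[np.argmin([sam(y, M @ p) for p in P])]
```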

    Nonlinear hyperspectral unmixing: strategies for nonlinear mixture detection, endmember estimation and band-selection

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2016. Abstract: Mixing phenomena in hyperspectral images depend on a variety of factors such as the resolution of observation devices, the properties of materials, and how these materials interact with incident light in the scene. Different parametric and nonparametric models have been considered to address hyperspectral unmixing problems. The simplest one is the linear mixing model. Nevertheless, it has been recognized that mixing phenomena can also be nonlinear. Kernel-based nonlinear mixing models have been applied to unmix the spectral information of hyperspectral images when the type of mixing occurring in the scene is too complex or unknown. However, the corresponding nonlinear analysis techniques are necessarily more challenging and complex than those employed for linear unmixing. Within this context, it makes sense to search for different strategies to produce simpler and/or more accurate results. In this thesis, we tackle three distinct parts of the complete spectral unmixing (SU) problem. First, we propose a technique for detecting nonlinearly mixed pixels. The detection approach is based on the comparison of the reconstruction errors using both a Gaussian process regression model and a linear regression model. The two errors are combined into a detection test statistic for which a probability density function can be reasonably approximated. Second, we propose an iterative endmember extraction algorithm to be employed in combination with the detection algorithm. The proposed detect-then-unmix strategy, which consists of extracting endmembers, detecting nonlinearly mixed pixels, and unmixing, is tested with synthetic and real images. Finally, we propose two methods for band selection (BS) in the reproducing kernel Hilbert space (RKHS), which lead to a significant reduction of the processing time required by nonlinear unmixing techniques. The first method employs the kernel k-means (KKM) algorithm to find clusters in the RKHS. Each cluster centroid is then associated with the closest mapped spectral vector. The second method is centralized, and it is based upon the coherence criterion, which sets the largest value allowed for correlations between the basis kernel functions characterizing the unmixing model. We show that the proposed BS approach is equivalent to solving a maximum clique problem (MCP), that is, to searching for the largest complete subgraph in a graph. Furthermore, we devise a strategy for selecting the coherence threshold and the Gaussian kernel bandwidth using coherence bounds for linearly independent bases. Simulation results illustrate the efficiency of the proposed method. A hyperspectral image (HI) is an image in which each pixel contains hundreds (or even thousands) of narrow, contiguous bands sampled over a wide region of the electromagnetic spectrum. Hyperspectral sensors usually trade spatial resolution for spectral resolution, mainly because of factors such as the distance between the instrument and the target scene and historically limited processing, transmission, and storage capabilities, which are becoming less and less of an issue. This type of image finds wide use in a range of applications in astronomy, agriculture, biomedical imaging, geosciences, physics, surveillance, and remote sensing.
The usually low spatial resolution of spectral sensors implies that what is observed at each pixel is typically a mixture of the spectral signatures of the materials present in the corresponding scene (usually called endmembers). Thus, a pixel in a hyperspectral image can no longer be characterized by a tone or color, but rather by the spectral signature of the material, or materials, found in the analyzed region. The simplest and most widely used model in hyperspectral imaging applications is the linear model, in which the observed pixel is modeled as a linear combination of the endmembers. However, strong evidence of multiple reflections of solar radiation and/or intimately mixed materials, i.e., materials mixed at a microscopic level, has led to various nonlinear models, among which bilinear models, post-nonlinear models, intimate mixing models, and nonparametric models stand out. This defines the spectral unmixing (SU) problem, which consists of determining the spectral signatures of the pure endmembers present in a scene and their proportions (called abundances) for each pixel of the image. SU is an inverse problem and blind by nature, since reliable information about the number of endmembers, their spectral signatures, and their distributions in a given scene is rarely available. This problem is strongly connected to blind source separation, but differs in that source independence cannot be assumed in SU, since the abundances are in fact proportions and therefore dependent (abundances are positive and must sum to 1). Determining the endmembers is known as endmember extraction, and the literature offers a range of algorithms for this purpose. These algorithms typically exploit the convex geometry that results from the linear model and the constraints on the abundances. When the endmembers are assumed known, or are estimated in a previous step, the SU problem becomes a supervised one, with input (endmember) and output (pixel) pairs, reducing to an inversion, or regression, step to determine the proportions of the endmembers in each pixel. When nonlinear models are considered, the literature presents several techniques that can be employed depending on the availability of information about the endmembers and about the models governing the interaction between light and the materials in a given scene. However, information about the type of mixture present in real scenes is rarely available. In this context, kernel methods, which assume nonparametric models, have been especially successful when applied to the SU problem. Among these methods, SK-Hype stands out; it employs least-squares support vector machine (LS-SVM) theory in an approach that considers a linear model with a nonlinear fluctuation represented by a function belonging to a reproducing kernel Hilbert space (RKHS). In this doctoral thesis, different problems were addressed within the overall process of SU of nonlinear hyperspectral images. Contributions were made to the detection of nonlinear mixtures, to the estimation of endmembers when a considerable part of the image contains nonlinear mixtures, and to band selection in the RKHS.
All methods were tested through simulations with synthetic and real data, considering both supervised and unsupervised unmixing. In Chapter 4, a semi-parametric method for detecting nonlinear mixtures in hyperspectral images is presented. This detector compares the performance of two models: a parametric linear one, using least squares (LS), and a nonparametric nonlinear one, using Gaussian processes. The idea of using nonparametric models is connected to the fact that, in practice, little is known about the real nature of the nonlinearity present in the scene. The fitting errors of these models are then compared in a test statistic whose distribution under the linear-mixture hypothesis can be approximated, so that a detection threshold can be estimated for a given false-alarm probability. The performance of the proposed detector was studied for supervised and unsupervised problems, and it was shown that the improvement in SU performance obtained using the proposed detector is statistically consistent. In addition, a degree of nonlinearity based on the relative energies of the linear and nonlinear contributions to the mixing process was defined to quantify the importance of the linear and nonlinear parts of the models. Such a definition is important for a correct assessment of the relative performance of different nonlinear mixture detection strategies. In Chapter 5, an iterative algorithm is proposed for endmember estimation as a preprocessing step for unsupervised SU problems. The algorithm interleaves nonlinear mixture detection and endmember estimation: an endmember estimation step is followed by a detection step in which a fraction of the most nonlinear pixels is discarded. This process is repeated for a maximum number of runs or until a stopping criterion is reached. It is shown that the combined use of the proposed detector with an endmember estimation algorithm leads to better SU results than state-of-the-art solutions. Simulations using different scenarios corroborate the conclusions. In Chapter 6, two methods for nonlinear SU of hyperspectral images that perform band selection (BS) directly in the RKHS are presented. The first method uses the kernel k-means (KKM) algorithm to find clusters directly in the RKHS; each centroid is then associated with the closest mapped spectral vector. The second method is centralized and based on the coherence criterion, which incorporates a measure of dictionary quality in the RKHS for nonlinear SU. This centralized approach is equivalent to solving a maximum clique problem (MCP). Unlike competing methods that do not include an efficient choice of the model parameters, the proposed method requires only an initial estimate of the number of selected bands. Simulation results using both synthetic and real data illustrate the quality of the unmixing results obtained with the proposed BS methods. Using SK-Hype with a reduced number of bands, abundance estimates as accurate as those obtained with SK-Hype over the entire available spectrum are achieved at a small fraction of the computational cost.
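    As a hedged illustration of the coherence-based band selection discussed above (a greedy simplification; the thesis formulates the centralized version as a maximum clique problem): with a Gaussian kernel, a band is kept only if its kernel correlation with every band already selected stays below a coherence threshold. Representing each band by its vector of endmember values (a row of M), the threshold mu0, and the bandwidth bw are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(u, v, bw=0.3):
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * bw ** 2))

def coherence_band_selection(M, mu0=0.9, bw=0.3):
    """Greedy coherence rule: keep bands that are weakly correlated in the RKHS."""
    selected = [0]
    for i in range(1, M.shape[0]):
        coherence = max(gaussian_kernel(M[i], M[j], bw) for j in selected)
        if coherence <= mu0:       # low coherence: the band adds information
            selected.append(i)
    return selected
```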

    A Convex Analysis Framework for Blind Separation of Non-Negative Sources


    OCM 2013 - 1st International Conference on Optical Characterization of Materials: March 6th - 7th, 2013, Karlsruhe, Germany

    The state of the art in optical characterization of materials is advancing rapidly. New insights into the theoretical foundations of this research field have been gained, and exciting practical developments have taken place, both driven by novel applications that are constantly emerging. This book presents the latest research results in the characterization of materials by spectral characteristics from the UV (240 nm) to the IR (14 µm), multispectral image analysis, X-ray, polarimetry, and microscopy.

    Robust hyperspectral image reconstruction for scene simulation applications

    This thesis presents the development of a spectral reconstruction method for multispectral (MSI) and hyperspectral (HSI) applications through enhanced dictionary learning and spectral unmixing methodologies. Earth observation and surveillance are largely undertaken by MSI sensing, such as that provided by Landsat, WorldView, Sentinel, etc.; however, the practical usefulness of the MSI data set is very limited, mainly because of the very small number of wave bands that MSI imagery can provide. One means to remedy this major shortcoming is to extend the MSI into HSI without expensive hardware investment. Specifically, spectral reconstruction has been one of the most critical elements in applications such as hyperspectral scene simulation. Hyperspectral scene simulation has been an important technique, particularly for defence applications. Scene simulation creates a virtual scene such that the modelling of the materials in the scene can be tailored freely, allowing certain parameters of the model to be studied. In the defence sector this is the most cost-effective technique for evaluating the vulnerability of soldiers and vehicles before they are deployed on foreign ground. The simulation of a hyperspectral scene requires the details of the materials in the scene, which are normally not available. Current state-of-the-art technology tries to make use of MSI satellite data and to transform it into HSI for hyperspectral scene simulation. One way to achieve this is through a reconstruction algorithm, commonly known as spectral reconstruction, which turns the MSI into HSI using an optimisation approach. The methodology adopted in this thesis is the development of a robust dictionary learning method to estimate the endmembers (EMs). Once the EMs are found, the abundance of materials in the scene can subsequently be estimated through a linear unmixing approach. Conventional approaches to material allocation in most hyperspectral scene simulators have used the Texture Material Mapper (TMM) algorithm, which allocates materials from a spectral library (a database of pre-compiled endmember materials) according to the minimum spectral Euclidean distance to a candidate pixel of the scene. This approach is shown (in this work) to be highly inaccurate, with large scene reconstruction error. This research attempts to use a dictionary learning technique for material allocation, solving it as an optimisation problem with the objectives of: (i) reconstructing the scene as closely as possible to the ground truth, with a fraction of the error of the TMM method, and (ii) learning trace materials, using a number of clusters 2-3 times the number of species (i.e., the intrinsic dimension) in the scene, to ensure that all material species in the scene are included in the scene reconstruction. Furthermore, two approaches complementing the goals of the learned dictionary have been proposed in this work: a rapid orthogonal matching pursuit (r-OMP), which enhances the performance of the orthogonal matching pursuit algorithm, and a semi-blind approximation of the irradiance of all pixels in the scene, including those in shaded regions. The main result of this research is the demonstration of the effectiveness of the proposed algorithms on real data sets. The SCD-SOMP method has been shown to be capable of learning both background and trace materials, even for a dictionary with a small number of atoms (≈10). Also, the KMSCD method is found to be the more versatile, with an overcomplete (non-orthogonal) dictionary capable of learning trace materials with high scene reconstruction accuracy (twice the accuracy of scenes simulated using the TMM method). Although this work has achieved an incremental improvement in spectral reconstruction, the need for dictionary training on a hyperspectral data set has been identified as a limitation to be removed in future research.
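    A minimal sketch (not the thesis' r-OMP) of the spectral-reconstruction step such a pipeline implies: sparse-code an MSI pixel against the MSI projection of a learned hyperspectral dictionary, then reconstruct the full spectrum with the same code. The spectral response matrix R, the dictionary D_hsi, and the sparsity level are assumed inputs.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruct_spectrum(y_msi, D_hsi, R, n_nonzero=5):
    """Lift an MSI pixel to a full spectrum via a shared sparse code."""
    D_msi = R @ D_hsi                       # project dictionary to MSI bands
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D_msi, y_msi)                   # sparse code in the MSI domain
    return D_hsi @ omp.coef_                # reconstruct in the HSI domain
```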

    OCM 2013 - Optical Characterization of Materials - conference proceedings

    The state of the art in optical characterization of materials is advancing rapidly. New insights into the theoretical foundations of this research field have been gained, and exciting practical developments have taken place, both driven by novel applications that are constantly emerging. This book presents the latest research results in the characterization of materials by spectral characteristics from the UV (240 nm) to the IR (14 µm), multispectral image analysis, X-ray, polarimetry, and microscopy.

    Algorithms for Fluorescence Lifetime Microscopy and Optical Coherence Tomography Data Analysis: Applications for Diagnosis of Atherosclerosis and Oral Cancer

    With significant progress made in the design and instrumentation of optical imaging systems, it is now possible to perform high-resolution tissue imaging in near real time. The prohibitively large amount of data obtained from such high-speed imaging systems precludes the possibility of manual data analysis by an expert. The paucity of algorithms for automated data analysis has been a major roadblock in both evaluating and harnessing the full potential of optical imaging modalities for diagnostic applications. This consideration forms the central theme of the research presented in this dissertation. Specifically, we investigate the potential of automated analysis of data acquired from a multimodal imaging system that combines fluorescence lifetime imaging (FLIM) with optical coherence tomography (OCT) for the diagnosis of atherosclerosis and oral cancer. FLIM is a fluorescence imaging technique that is capable of providing information about autofluorescent tissue biomolecules. OCT, on the other hand, is a structural imaging modality that exploits the intrinsic reflectivity of tissue samples to provide high-resolution 3-D tomographic images. Since FLIM and OCT provide complementary information about tissue biochemistry and structure, respectively, we hypothesize that the combined information from the multimodal system will increase the sensitivity and specificity of the diagnosis of atherosclerosis and oral cancer. The research presented in this dissertation can be divided into two main parts. The first part concerns the development and application of algorithms for providing a quantitative description of FLIM and OCT images. The quantitative FLIM and OCT features obtained in the first part of the research are subsequently used to perform automated tissue diagnosis based on statistical classification models. The results of the research presented in this dissertation show the feasibility of using automated algorithms for FLIM and OCT data analysis to perform tissue diagnosis.
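    A generic sketch of the second stage described above: concatenate quantitative FLIM and OCT features into a multimodal feature vector and train a statistical classifier. The features, labels, and classifier below are placeholders, not the dissertation's actual models.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
flim_features = rng.random((100, 6))   # e.g. per-region lifetimes, intensities
oct_features = rng.random((100, 4))    # e.g. attenuation, texture measures
labels = rng.integers(0, 2, 100)       # toy benign/malignant labels

X = np.hstack([flim_features, oct_features])        # multimodal feature vector
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, labels, cv=5).mean())  # chance-level on toy data
```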