219 research outputs found

    Spectral Superresolution of Multispectral Imagery with Joint Sparse and Low-Rank Learning

    Full text link
    Extensive attention has been paid to enhancing the spatial resolution of hyperspectral (HS) images with the aid of multispectral (MS) images in remote sensing. However, the ability to fuse HS and MS images remains limited, particularly in large-scale scenes, due to the limited acquisition of HS images. Alternatively, we super-resolve MS images in the spectral domain by means of partially overlapped HS images, yielding a novel and promising topic: spectral superresolution (SSR) of MS imagery. This is a challenging and less-investigated task due to its high ill-posedness in inverse imaging. To this end, we develop a simple but effective method, called joint sparse and low-rank learning (J-SLoL), to spectrally enhance MS images by jointly learning low-rank HS-MS dictionary pairs from overlapped regions. J-SLoL infers and recovers the unknown hyperspectral signals over a larger coverage by sparse coding on the learned dictionary pair. Furthermore, we validate the SSR performance on three HS-MS datasets (two for classification and one for unmixing) in terms of reconstruction, classification, and unmixing by comparing with several existing state-of-the-art baselines, showing the effectiveness and superiority of the proposed J-SLoL algorithm. The codes and datasets are available at https://github.com/danfenghong/IEEE_TGRS_J-SLoL, contributing to the RS community
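    The coupling at the heart of J-SLoL — code a multispectral pixel on an MS dictionary, then synthesize the hyperspectral signal with the paired HS dictionary — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the dictionary sizes are invented, the dictionaries are random stand-ins rather than jointly learned low-rank pairs, and a ridge-regularized code replaces true sparse coding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 MS bands, 100 HS bands, 20 shared dictionary atoms.
n_ms, n_hs, n_atoms = 4, 100, 20

# Coupled dictionary pair. In J-SLoL these are learned jointly (with sparse
# and low-rank constraints) from the overlapped HS-MS region; random here.
D_hs = rng.standard_normal((n_hs, n_atoms))
D_ms = rng.standard_normal((n_ms, n_atoms))

# A test MS pixel generated from a code shared by both dictionaries.
alpha_true = np.zeros(n_atoms)
alpha_true[[3, 7]] = [1.0, -0.5]
y_ms = D_ms @ alpha_true

def recover_hs(y_ms, D_ms, D_hs, lam=1e-3):
    """Ridge-regularized code on the MS dictionary (a stand-in for sparse
    coding), then synthesis with the coupled HS dictionary."""
    A = D_ms.T @ D_ms + lam * np.eye(D_ms.shape[1])
    alpha = np.linalg.solve(A, D_ms.T @ y_ms)
    return D_hs @ alpha

x_hs = recover_hs(y_ms, D_ms, D_hs)
print(x_hs.shape)  # (100,)
```

Because the code is shared across the dictionary pair, any coding scheme (ridge here, sparse coding in the paper) transfers the MS observation into an HS estimate.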

    Improvements to algorithms for hyperspectral linear unmixing based on statistical model

    Get PDF
    Spectral mixing is one of the main problems that arise when characterizing the spectral constituents residing at a sub-pixel level in a hyperspectral scene. In this work we propose an improvement of algorithms based on a statistical model, i.e., the normal compositional model (NCM), with a novel sampling approach inspired by genetic algorithms. Furthermore, linearization is introduced to reduce computational complexity

    Hyperspectral Methods of Determining Grit Application Density on Sandpaper

    Get PDF
    A low-cost, real-time method of determining the density of grit applied to sandpaper does not currently exist. This thesis explores three methods for determining grit density from digital image data. A means of characterizing the application in terms of frequency using discrete cosine transform basis images will be explored. An RX detector algorithm to characterize the image background will be tested. A linear unmixing methodology will be developed that characterizes the proportion of glue and grit present in each hyperspectral pixel vector
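    For the two-endmember case described here (glue vs. grit), the linear unmixing step has a closed form: project the pixel onto the segment between the two endmember signatures. The five-band signatures below are invented stand-ins, not measured spectra.

```python
import numpy as np

# Hypothetical endmember signatures for a 5-band sensor (illustrative only).
glue = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
grit = np.array([0.2, 0.3, 0.2, 0.4, 0.3])

def unmix_two(pixel, e1, e2):
    """Closed-form abundance of e1 under the linear mixing model with the
    sum-to-one constraint; clipping to [0, 1] enforces non-negativity."""
    d = e1 - e2
    a = float((pixel - e2) @ d / (d @ d))
    return min(max(a, 0.0), 1.0)

pixel = 0.3 * glue + 0.7 * grit  # synthetic mixed pixel: 30% glue, 70% grit
a_glue = unmix_two(pixel, glue, grit)
print(round(a_glue, 3))  # 0.3
```

Applying this per pixel over a hyperspectral image yields a glue/grit proportion map from which grit application density can be estimated.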

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Full text link
    Hyperspectral imaging, also known as image spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the bulk cost in manpower and material resources poses new challenges for reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken the tasks of numerous artificial intelligence (AI)-related applications. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the effects of various spectral variabilities in the HS imaging process and the complexity and redundancy of high-dimensional HS signals. Compared to convex models, non-convex modeling, which can characterize more complex real scenes and provide model interpretability both technically and theoretically, has proven to be a feasible solution for reducing the gap between challenging HS vision tasks and currently advanced intelligent data processing models

    Hyperspectral Image Analysis through Unsupervised Deep Learning

    Get PDF
    Hyperspectral image (HSI) analysis has become an active research area in the computer vision field, with a wide range of applications. However, in order to yield better recognition and analysis results, we need to address two challenging issues of HSI, i.e., the existence of mixed pixels and its significantly low spatial resolution (LR). In this dissertation, spectral unmixing (SU) and hyperspectral image super-resolution (HSI-SR) approaches are developed to address these two issues with advanced deep learning models in an unsupervised fashion. A specific application, anomaly detection, is also studied to show the importance of SU. Although deep learning has achieved state-of-the-art performance on supervised problems, its practice on unsupervised problems has not been fully developed. To address the problem of SU, an untied denoising autoencoder is proposed to decompose the HSI into endmembers and abundances with non-negativity and abundance sum-to-one constraints. The denoising capacity is incorporated into the network with a sparsity constraint to boost the performance of endmember extraction and abundance estimation. Moreover, the first attempt is made to solve the problem of HSI-SR using an unsupervised encoder-decoder architecture by fusing the LR HSI with the high-resolution multispectral image (MSI). The architecture is composed of two encoder-decoder networks, coupled through a shared decoder, to preserve the rich spectral information from the HSI network. It encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI, and the angular difference between representations is minimized to reduce spectral distortion. Finally, a novel detection algorithm is proposed through spectral unmixing and dictionary-based low-rank decomposition, where the dictionary is constructed with mean-shift clustering and the coefficients of the dictionary are encouraged to be low-rank. Experimental evaluations show significant improvement in the performance of anomaly detection conducted on the abundances (through SU). The effectiveness of the proposed approaches has been evaluated thoroughly by extensive experiments, achieving state-of-the-art results
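    The abundance constraints mentioned above (non-negativity and sum-to-one) can be enforced structurally in a network's abundance layer, e.g. with a softmax, after which the decoder reduces to linear mixing by the endmember matrix. The NumPy-only sketch below illustrates that idea with invented numbers; it is not the dissertation's trained autoencoder (which uses an untied denoising architecture and a sparse Dirichlet representation).

```python
import numpy as np

def softmax(z):
    """Non-negative weights that sum to one: a structural ASC/ANC layer."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical setup: 3 endmembers over 6 bands. In an unmixing autoencoder
# the decoder weights play the role of the endmembers; fixed stand-ins here.
E = np.array([[1.0, 0.2, 0.1],
              [0.8, 0.4, 0.1],
              [0.5, 0.6, 0.2],
              [0.3, 0.8, 0.3],
              [0.2, 0.9, 0.5],
              [0.1, 0.7, 0.9]])

logits = np.array([2.0, 0.5, -1.0])  # encoder output for one pixel
a = softmax(logits)                  # abundances: non-negative, sum to one
x_hat = E @ a                        # decoder: linear-mixing reconstruction

print(abs(a.sum() - 1.0) < 1e-12, (a >= 0).all())  # True True
```

Training would then only need a reconstruction loss between `x_hat` and the observed pixel; the physical constraints hold by construction rather than by penalty terms.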

    Nonlinear hyperspectral unmixing: strategies for nonlinear mixture detection, endmember estimation and band-selection

    Get PDF
    Thesis (doctorate) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Electrical Engineering, Florianópolis, 2016. Abstract: Mixing phenomena in hyperspectral images depend on a variety of factors, such as the resolution of observation devices, the properties of materials, and how these materials interact with incident light in the scene. Different parametric and nonparametric models have been considered to address hyperspectral unmixing problems. The simplest one is the linear mixing model. Nevertheless, it has been recognized that mixing phenomena can also be nonlinear. Kernel-based nonlinear mixing models have been applied to unmix spectral information of hyperspectral images when the type of mixing occurring in the scene is too complex or unknown. However, the corresponding nonlinear analysis techniques are necessarily more challenging and complex than those employed for linear unmixing. Within this context, it makes sense to search for different strategies to produce simpler and/or more accurate results. In this thesis, we tackle three distinct parts of the complete spectral unmixing (SU) problem. First, we propose a technique for detecting nonlinearly mixed pixels. The detection approach is based on the comparison of the reconstruction errors of a Gaussian process regression model and a linear regression model. The two errors are combined into a detection test statistic for which a probability density function can be reasonably approximated. Second, we propose an iterative endmember extraction algorithm to be employed in combination with the detection algorithm. The proposed detect-then-unmix strategy, which consists of extracting endmembers, detecting nonlinearly mixed pixels, and unmixing, is tested with synthetic and real images. Finally, we propose two methods for band selection (BS) in the reproducing kernel Hilbert space (RKHS), which lead to a significant reduction of the processing time required by nonlinear unmixing techniques. The first method employs the kernel k-means (KKM) algorithm to find clusters in the RKHS; each cluster centroid is then associated with the closest mapped spectral vector. The second method is centralized and based upon the coherence criterion, which sets the largest value allowed for correlations between the basis kernel functions characterizing the unmixing model. We show that the proposed BS approach is equivalent to solving a maximum clique problem (MCP), that is, to searching for the largest complete subgraph in a graph. Furthermore, we devise a strategy for selecting the coherence threshold and the Gaussian kernel bandwidth using coherence bounds for linearly independent bases. Simulation results illustrate the efficiency of the proposed method.

    A hyperspectral image (HI) is an image in which each pixel contains hundreds (or even thousands) of narrow, contiguous bands sampled over a wide range of the electromagnetic spectrum. Hyperspectral sensors typically trade spatial resolution for spectral resolution, mainly owing to factors such as the distance between the instrument and the target scene and to historical limits on processing, transmission, and storage capacity, which are becoming less restrictive. Such images find wide use in applications across astronomy, agriculture, biomedical imaging, geosciences, physics, surveillance, and remote sensing. The usually low spatial resolution of spectral sensors implies that what is observed in each pixel is normally a mixture of the spectral signatures of the materials present in the corresponding scene (commonly called endmembers). A pixel in a hyperspectral image can therefore no longer be described by a tone or color, but rather by the spectral signature of the material, or materials, found in the analyzed region. The simplest and most widely used model in hyperspectral applications is the linear model, in which the observed pixel is modeled as a linear combination of the endmembers. However, strong evidence of multiple reflections of solar radiation and/or intimately mixed materials, i.e., materials mixed at a microscopic level, has led to several nonlinear models, notably bilinear models, post-nonlinear models, intimate mixing models, and nonparametric models. This defines the spectral unmixing (SU) problem, which consists of determining the spectral signatures of the pure endmembers present in a scene and their proportions (called abundances) for each pixel of the image. SU is an inverse and inherently blind problem, since reliable information about the number of endmembers, their spectral signatures, and their distribution in a given scene is rarely available. The problem is closely connected to blind source separation, but differs in that source independence cannot be assumed in SU, since the abundances are in fact proportions and therefore dependent (abundances are positive and must sum to one). Determining the endmembers is known as endmember extraction, and the literature offers a range of algorithms for this purpose, which typically exploit the convex geometry induced by the linear model and the abundance constraints.

    When the endmembers are known, or estimated in a prior step, SU becomes a supervised problem, with input (endmembers) and output (pixels) pairs, reducing to an inversion, or regression, step that determines the endmember proportions in each pixel. When nonlinear models are considered, the literature presents several techniques whose applicability depends on the information available about the endmembers and about the models governing the interaction between light and the materials in a given scene. Information about the type of mixing present in real scenes is, however, rarely available. In this context, kernel methods, which assume nonparametric models, have been especially successful when applied to SU. Among them, SK-Hype stands out: it employs least-squares support vector machine (LS-SVM) theory in an approach that considers a linear model with a nonlinear fluctuation represented by a function belonging to a reproducing kernel Hilbert space (RKHS). This doctoral thesis addresses several parts of the complete SU process for nonlinear hyperspectral images, contributing to the detection of nonlinear mixtures, to endmember estimation when a considerable part of the image contains nonlinear mixtures, and to band selection in the RKHS. All methods were tested through simulations with synthetic and real data, considering both supervised and unsupervised unmixing.

    In Chapter 4, a semi-parametric method for detecting nonlinear mixtures in hyperspectral images is presented. The detector compares the performance of two models: a parametric linear one, fitted by least squares (LS), and a nonparametric nonlinear one, using Gaussian processes. The use of nonparametric models reflects the fact that, in practice, little is known about the true nature of the nonlinearity present in the scene. The fitting errors of the two models are then compared in a test statistic whose distribution under the linear-mixture hypothesis can be approximated, allowing a detection threshold to be estimated for a given false-alarm probability. The detector's performance was studied in supervised and unsupervised settings, and the improvement in SU performance obtained with the proposed detector was shown to be statistically consistent. In addition, a degree of nonlinearity, based on the relative energies of the linear and nonlinear contributions to the mixing process, was defined to quantify the importance of the linear and nonlinear parts of the models; such a definition is important for a correct assessment of the relative performance of different nonlinear-mixture detection strategies. In Chapter 5, an iterative algorithm is proposed for endmember estimation as a preprocessing step for unsupervised SU problems. It interleaves nonlinear-mixture detection and endmember estimation: an endmember estimation step is followed by a detection step in which a fraction of the most nonlinear pixels is discarded, and the process is repeated for a maximum number of iterations or until a stopping criterion is met. The combined use of the proposed detector and an endmember estimation algorithm is shown to yield better SU results than state-of-the-art solutions, with simulations over different scenarios corroborating the conclusions. In Chapter 6, two methods for nonlinear SU of hyperspectral images employing band selection (BS) directly in the RKHS are presented. The first uses the kernel k-means (KKM) algorithm to find clusters directly in the RKHS, each centroid then being associated with the closest spectral vector. The second is centralized and based on the coherence criterion, which incorporates a measure of dictionary quality in the RKHS for nonlinear SU; this centralized approach is equivalent to solving a maximum clique problem (MCP). Unlike competing methods that do not include an efficient choice of model parameters, the proposed method requires only an initial estimate of the number of selected bands. Simulation results with both synthetic and real data illustrate the quality of the unmixing obtained with the proposed BS methods: using SK-Hype with a reduced number of bands, abundance estimates are obtained that are as accurate as those produced by SK-Hype with the full available spectrum, at a small fraction of the computational cost
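    The detect-then-unmix idea rests on one comparison: a nonlinearly mixed pixel leaves a residual that a linear least-squares fit cannot remove, while a flexible nonparametric model still fits it. The sketch below uses a Gaussian kernel ridge regressor as a stand-in for the thesis's Gaussian process model, with invented endmembers and a bilinear interaction term; only the structure of the test statistic (a comparison of reconstruction errors) follows the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic endmember matrix: 30 bands, 4 endmembers (stand-ins).
M = rng.uniform(0.1, 0.9, size=(30, 4))
a = np.array([0.4, 0.3, 0.2, 0.1])

lin_pixel = M @ a                                # linear mixture
bil_pixel = lin_pixel + 0.5 * M[:, 0] * M[:, 1]  # add a bilinear term

def linear_residual(y, M):
    """Residual norm of the (unconstrained) linear least-squares fit."""
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    return np.linalg.norm(y - M @ coef)

def kernel_residual(y, M, gamma=2.0, lam=1e-3):
    """Residual norm of a Gaussian kernel ridge fit mapping each band's
    endmember values to the pixel value (stand-in for the GP model)."""
    d2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * d2)
    coef = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return np.linalg.norm(y - K @ coef)

# Detection statistic: how much the nonlinear model improves on the linear one.
T_lin = linear_residual(lin_pixel, M) - kernel_residual(lin_pixel, M)
T_bil = linear_residual(bil_pixel, M) - kernel_residual(bil_pixel, M)
print(round(T_lin, 4), round(T_bil, 4))
```

A pixel is flagged as nonlinearly mixed when its statistic exceeds a threshold calibrated, as in Chapter 4, from the statistic's approximate distribution under the linear hypothesis.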

    Caracterização e estudo comparativo de exsudações de hidrocarbonetos e plays petrolíferos em bacias terrestres das regiões central do Irã e sudeste do Brasil usando sensoriamento remoto espectral

    Get PDF
    Advisor: Carlos Roberto de Souza Filho. Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Geociências. Abstract: The objective of this research was to explore the signatures of seeping hydrocarbons on the surface using spectral remote sensing technology. This was achieved first by conducting a comprehensive review of the capacities and potentials of the technique for direct and indirect seepage detection. Next, the technique was applied to investigate two distinctive test sites, located in Iran and Brazil, known to retain active microseepage systems and bituminous outcrops, respectively. The first study area is located near the city of Qom in Iran and consists of the Alborz oilfield, buried under Oligocene sediments of the Upper Red Formation. The second site is located near the town of Anhembi on the eastern edge of the Paraná Basin in Brazil and includes bitumen accumulations in the Triassic sandstones of the Pirambóia Formation. Our work in the Qom area integrated evidence from (i) petrographic, spectroscopic, and geochemical studies in the laboratory, (ii) outcrop investigations in the field, and (iii) broad-scale anomaly mapping via ASTER and Sentinel-2 multispectral datasets. The outcomes of this study were novel mineralogical and geochemical indicators for microseepage characterization, an updated microseepage model, and a classification scheme for the microseepage-induced alterations. Our study indicated that active microseepage systems occur in large parts of the lithofacies in the Qom area, implying that the extent of the petroleum reservoir is much larger than previously thought. During this work, we also developed new methodologies for spectroscopic data analysis and processing. In addition, using simulated data, we showed that the WorldView-3 satellite instrument has the potential for direct hydrocarbon detection. Following this demonstration, real datasets were acquired over the oil-sand outcrops of the Anhembi area. The area was further imaged on the ground and from the air using an AisaFENIX hyperspectral imaging system. This was followed by outcrop studies and sampling in the field, and by close-range spectroscopy of the samples in the laboratory using both imaging (i.e., sisuCHEMA) and non-imaging (i.e., FieldSpec-4) instruments. The study demonstrated that a multi-scale spectroscopic approach can provide a complete picture of the variations in the content and composition of bitumen and the accompanying alteration mineralogy. The hydrocarbon signature, especially the feature centered at 2300 nm, was shown to be consistent and comparable across scales, and capable of estimating the bitumen content of oil sands at all imaging scales. Doctorate in Geosciences (Geology and Natural Resources). Grant 2015/06663-7, FAPES

    Ensemble classifiers for land cover mapping

    Get PDF
    This study presents experimental investigations of supervised ensemble classification for land cover classification. Despite the array of classifiers available in machine learning for creating an ensemble, knowing and understanding the correct classifier to use for a particular dataset remains a major challenge. The ensemble method increases classification accuracy by consulting several expert classifiers before taking the final decision. This study generated various land cover maps using image classification, in order to establish how many classifiers should be used for creating an ensemble, and exploits feature selection techniques to create diversity in ensemble classification. Landsat imagery of Kampala (the capital of Uganda, East Africa), the AVIRIS hyperspectral dataset of Indian Pines, Indiana, and support vector machines were used to carry out the investigation. The research reveals that the superiority of the different classification approaches employed depends on the datasets used; in addition, the pre-processing stage and the strategy used during the design phase of each classifier are essential. The results obtained from the experiments showed that there is no significant benefit in using many base classifiers for decision making in ensemble classification. The research outcome also reveals how to design a better ensemble using a feature selection approach for land cover mapping. The study also reports an experimental comparison of generalized support vector machines, random forests, C4.5, neural networks, and bagging classifiers for land cover classification of hyperspectral images. These classifiers are among the state-of-the-art supervised machine learning methods for solving complex pattern recognition problems. The pixel purity index was used to obtain the endmembers from the Indian Pines and Washington DC Mall hyperspectral image datasets. The generalized reduced gradient optimization algorithm was then used to estimate fractional abundances in the image datasets, yielding numeric values for land cover classification. The fractional abundance of each pixel was obtained using the spectral signature values of the endmembers and the pixel values of the class labels. The classifiers show promising results: using the Indian Pines and Washington DC Mall hyperspectral datasets, an experimental comparison of all the classifiers' performances reveals that random forests outperform the other classifiers and are computationally efficient. The study makes a positive contribution to the problem of classifying land cover in hyperspectral images by exploring the use of the generalized reduced gradient method and five supervised classifiers. The accuracy comparison of these classifiers is valuable for decision makers weighing trade-offs between method accuracy and complexity. The research has led to nine publications, including six international and one local conference paper, one paper in the Computing Research Repository (CoRR), one submitted journal paper, and one Springer book chapter; Abe et al., 2012 obtained a merit award based on the reviewer reports and the scores of the conference committee members
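    The "consulting experts" step of an ensemble is, at its simplest, a plurality vote over the base classifiers' predictions. The class labels below are invented; the three rows stand in for, say, an SVM, a random forest, and a C4.5 tree classifying five pixels.

```python
import numpy as np

# Hypothetical predictions of three base classifiers on five pixels
# (class labels 0, 1, 2); each row is one classifier's output.
preds = np.array([[0, 1, 2, 1, 0],
                  [0, 1, 1, 1, 0],
                  [2, 1, 2, 0, 0]])

def majority_vote(preds):
    """Per-sample plurality vote across base classifiers (rows)."""
    n_classes = preds.max() + 1
    # Count votes per class for each column (sample), then take the winner.
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return counts.argmax(axis=0)

print(majority_vote(preds).tolist())  # [0, 1, 2, 1, 0]
```

The study's finding that many base classifiers add little suggests keeping such ensembles small and diverse (e.g., via feature selection) rather than simply adding voters.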

    Human retinal oximetry using hyperspectral imaging

    Get PDF
    The aim of the work reported in this thesis was to investigate the possibility of measuring human retinal oxygen saturation using hyperspectral imaging. Hyperspectral imaging enables a direct, non-invasive, quantitative mapping of retinal oxygen saturation, whereby the absorption spectra of oxygenated and deoxygenated haemoglobin are recorded and analysed. Implementation of spectral retinal imaging thus requires ophthalmic instrumentation capable of efficiently recording the requisite spectral data cube. For this purpose, a spectral retinal imager was developed for the first time by integrating a liquid crystal tuneable filter into the illumination system of a conventional fundus camera, enabling the recording of narrow-band spectral images in time sequence from 400 nm to 700 nm. Post-processing algorithms were developed to enable accurate exploitation of spectral retinal images and to overcome the confounding problems associated with this technique, namely erratic eye motion and illumination variation. Several algorithms were developed to provide semi-quantitative and quantitative oxygen saturation measurements. Accurate quantitative measurement necessitated an optical model of light propagation into the retina that takes into account the absorption and scattering of light by red blood cells. To validate the oxygen saturation measurements and algorithms, a model eye was constructed and measurements were compared with gold-standard measurements obtained by a co-oximeter. The accuracy of the oxygen saturation measurements was 3.31% ± 2.19% for oxygenated blood samples. Clinical trials with healthy and diseased subjects were analysed, and oxygen saturation measurements were compared to assess certain retinal diseases. Oxygen saturation measurements were in agreement with clinician expectations in both veins (48% ± 9%) and arteries (96% ± 5%). We also present in this thesis the development of a novel clinical instrument, based on IRIS, to perform retinal oximetry. Al-baath University, Syri
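    The oximetric computation described above can be reduced to a two-wavelength Beer-Lambert sketch: unmix the measured absorbances against the extinction spectra of oxy- and deoxy-haemoglobin, then form the saturation ratio. The coefficients below are illustrative numbers, not tabulated values, and this ignores the scattering correction that the thesis's optical model includes.

```python
import numpy as np

# Hypothetical extinction coefficients at two wavelengths; columns: [HbO2, Hb].
E = np.array([[0.30, 1.00],   # oxygen-sensitive wavelength: deoxy absorbs more
              [0.80, 0.80]])  # near-isosbestic wavelength: equal absorption

def saturation(absorbance, E):
    """Least-squares haemoglobin concentrations, then SO2 = HbO2/(HbO2+Hb)."""
    c, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    return c[0] / c.sum()

true_c = np.array([0.96, 0.04])   # 96% saturated, like a healthy artery
measured = E @ true_c             # modelled absorbances for that blood
print(round(saturation(measured, E), 2))  # 0.96
```

In practice many narrow bands (400-700 nm here) are used instead of two, which turns the same least-squares inversion into an overdetermined, noise-tolerant fit.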