
    Diseño e implementación de una cadena completa para desmezclado de imágenes hiperespectrales en tarjetas gráficas programables (GPUs)

    The main contribution of this thesis is the proposal of several new parallel algorithms for spectral mixture analysis of remotely sensed hyperspectral images obtained from airborne or satellite Earth observation platforms. These algorithms are focused on the identification of the most spectrally pure constituents ("endmembers") of a hyperspectral image, and on the characterization of mixed pixels as linear or nonlinear combinations of endmembers weighted by their fractional abundances on a sub-pixel basis. Once the theoretical foundations of the study are described, we describe in detail the new parallel algorithms developed as the main contribution of this research work, which integrate a complete spectral unmixing chain with the following stages: 1) automatic estimation of the number of endmembers in the hyperspectral image; 2) automatic identification of the spectral signatures of those endmembers; and 3) estimation of the fractional abundance of each endmember in each pixel of the image. After describing the new parallel algorithms introduced in this work, we develop a comprehensive quantitative and comparative analysis of their unmixing accuracy and computational performance on a set of graphics processing unit (GPU)-based architectures, including the NVidia Tesla C1060 and the NVidia GeForce 580 GTX. The experimental results are evaluated in the context of two real applications with great societal impact: the automatic detection of the thermal hot spots of the fires which spread in the World Trade Center area of New York during the days after the terrorist attack of September 11th, 2001, and the real-time mapping of minerals in the Cuprite mining district of Nevada, USA, using hyperspectral data sets collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) and the Hyperion instrument onboard the Earth Observing One (EO-1) spacecraft. The NASA and United States Geological Survey (USGS) teams involved in the emergency response (World Trade Center image) and in the mineral mapping (Cuprite image) acknowledged that the availability of real-time spectral unmixing techniques would have greatly facilitated their work at the time of these events, and the techniques presented here have been developed with the goal of enabling such tasks in future events. The thesis concludes with a detailed discussion of the techniques presented (including recommendations on their best use in different circumstances), with the main conclusions and plausible future research lines derived from the study, and with a detailed bibliography covering both the general literature and the specific contributions of the candidate to this topic.
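
As a minimal illustration of the linear mixture model and the abundance-estimation stage described above (a sketch only: the endmember matrix, the sum-to-one weighting trick and all values are invented for this example, and this is not the GPU code of the thesis), abundances for one pixel can be estimated under the non-negativity and sum-to-one constraints as follows:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical example: 5 spectral bands, 3 endmembers (columns).
E = np.array([[0.10, 0.80, 0.30],
              [0.15, 0.75, 0.35],
              [0.60, 0.20, 0.40],
              [0.70, 0.15, 0.45],
              [0.65, 0.10, 0.50]])

pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]  # synthetic mixed pixel

# Enforce non-negativity with NNLS and approximate the sum-to-one constraint
# by appending a heavily weighted row of ones (a common trick; the weight
# delta trades off the data fit against the constraint).
delta = 1e3
E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])
p_aug = np.append(pixel, delta)
abundances, _ = nnls(E_aug, p_aug)
print(abundances)  # close to [0.5, 0.3, 0.2]
```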

    Hyperspectral Remote Sensing Data Analysis and Future Challenges


    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to each sensor type. This review paper brings together the advances of image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to investigate the vibrant topic of data restoration, supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore the restoration techniques and fast-forward the community. The toolbox is available at https://github.com/ImageRestorationToolbox.
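
Purely as a sketch of the degradation/restoration setting such a review covers (assuming a simple additive-noise model; the image and noise level below are synthetic stand-ins, not data from the paper), a single hyperspectral band could be denoised with total-variation regularization:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))         # synthetic smooth band
observed = clean + rng.normal(scale=0.1, size=clean.shape)  # y = x + n

# Total-variation denoising suppresses noise while preserving edges.
restored = denoise_tv_chambolle(observed, weight=0.1)
print(float(np.mean((observed - clean) ** 2)),
      float(np.mean((restored - clean) ** 2)))  # restored MSE should be lower
```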

    High Performance Modelling and Computing in Complex Medical Conditions: Realistic Cerebellum Simulation and Real-Time Brain Cancer Detection

    Personalized medicine is the medicine of the future, and it is supported by ongoing technological development that will be crucial in this field. Several areas of healthcare research require high-performance technological systems that process huge amounts of data in real time. By exploiting High Performance Computing (HPC) technologies, scientists aim to develop accurate diagnoses and personalized therapies. To reach these goals, three main activities have to be investigated: managing large-scale data acquisition and analysis, designing computational models that simulate the patient's clinical status, and developing medical support systems that provide fast decisions during diagnosis or therapy. These three aspects rely on technological systems that may appear disconnected but that, in this new medicine, will be connected in some way. As far as data are concerned, people today are immersed in technology and produce huge amounts of heterogeneous data. Part of these data has great medical potential: it can help delineate a patient's health condition and can be integrated into the medical record to support clinical decisions. This process requires technological systems able to organize, analyse and share this information while guaranteeing fast data usability. In this context, HPC and, in particular, multicore and manycore processors will surely be important, since they can spread the computational workload across different cores to reduce processing times. These solutions are also crucial in computational modelling, a field in which several research groups aim to implement models able to realistically reproduce the behaviour of human organs in order to develop simulators of them. Such models are called digital twins and make it possible to reproduce the organ activity of a specific patient, in order to study disease progression or a new therapy. Patient data are the inputs of these models, which predict the patient's condition while avoiding invasive and expensive exams. The computational support that a realistic organ simulator requires is significant; for this reason, devices such as GPUs, FPGAs, multicore processors or even supercomputers are needed. As an example in this field, the second chapter of this work describes the development of a cerebellar simulator exploiting HPC; the complexity of the realistic mathematical models used justifies this technological choice to achieve reduced processing times. This work is part of the Human Brain Project, which aims to run a complete realistic simulation of the human brain. Finally, these technologies have a crucial role in the development of medical support systems. During surgery it is often essential that a support system provides a real-time answer, and the fact that this answer results from solving a complex mathematical problem makes HPC systems essential in this field as well. In environments such as operating rooms, it is more plausible that the computation is performed by local desktop systems able to process the data acquired directly during surgery. The third chapter of this thesis describes the development of a brain cancer detection system exploiting GPUs. This support system, developed as part of the HELICoiD project, performs real-time processing of brain hyperspectral images acquired during surgery to provide a classification map that highlights the tumor, helping the neurosurgeon during tissue resection. In this field, the GPU has been crucial to achieving real-time processing. Finally, it is possible to assert that HPC will have a crucial role in most fields of personalized medicine, since they involve processing great amounts of data in reduced times with the aim of providing patient-specific diagnoses and therapies.
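
As a loose sketch of the per-pixel, GPU-friendly classification pattern such a support system relies on (this is not the HELICoiD pipeline; the linear classifier weights and image sizes are invented for illustration), the whole classification map reduces to one large matrix product, which is exactly the workload GPUs accelerate:

```python
import numpy as np  # np could be swapped for cupy to run the same code on a GPU

rng = np.random.default_rng(1)
H, W, B, C = 128, 128, 100, 4          # height, width, bands, tissue classes
cube = rng.random((H, W, B))           # synthetic hyperspectral cube
weights = rng.normal(size=(B, C))      # hypothetical trained linear classifier

pixels = cube.reshape(-1, B)           # flatten to (H*W, bands)
scores = pixels @ weights              # one large matrix product: GPU-friendly
label_map = scores.argmax(axis=1).reshape(H, W)  # per-pixel classification map
print(label_map.shape, np.unique(label_map))
```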

    Mineral identification using data-mining in hyperspectral infrared imagery

    The geological applications of hyperspectral infrared imagery mainly consist of mineral identification and mapping, carried out in situ with airborne or portable instruments and in core logging. Finding mineral indicators offers considerable benefits for mineralogy and mineral exploration, which usually involve the use of portable instruments and core logging. Moreover, faster and more mechanized systems increase the precision with which mineral indicators are identified and avoid possible misclassification. The objective of this thesis was therefore to create a tool that uses hyperspectral infrared imagery and processes the data through image analysis and machine learning methods to identify small mineral grains used as mineral indicators. Such a system could be applied in different circumstances as an assistant for geological analysis and mineral exploration. The experiments were conducted under laboratory conditions in the long-wave infrared (7.7 μm to 11.8 μm, LWIR), with a LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The process began with a method to calculate the continuum removal: a rank-1 Non-negative Matrix Factorization (NMF) is applied to estimate the down-welling radiance, which is then compared with other conventional methods. The results indicate successful suppression of the continuum from the spectra, enabling the spectra to be compared with spectral libraries. Afterwards, to obtain an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine and quartz grains. The results indicated that the unsupervised approach was more suitable because it does not depend on a training stage. Once these results were obtained, two clustering algorithms were tested to create False Colour Composites (FCC). The results of this comparison indicate significant computational efficiency (more than 20 times faster) and promising performance for mineral identification; however, tests on LWIR data showed a lack of prediction of the grain surface when grains were irregular or mineral aggregates were present. The reliability of automated LWIR hyperspectral mineral identification was then tested by comparing the results with two different ground truths (GT), rigid-GT (manual labelling of regions) and observed-GT (manual labelling of pixels), for quantitative evaluation: observed-GT increased the accuracy by up to 1.5 times compared with rigid-GT. The samples were also examined by Micro X-ray Fluorescence (XRF) and Scanning Electron Microscopy (SEM) in order to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The results of the XRF imagery were compared with automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and were also used for GT validation. Overall, the four methods of this thesis (1. continuum removal; 2. classification or clustering methods for mineral identification; 3. two algorithms for clustering of mineral spectra; 4. reliability verification) represent beneficial methodologies for identifying minerals. They have the advantage of being non-destructive and relatively accurate, and of having low computational cost, which might qualify them for use in laboratory conditions or in the field.
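
As a rough sketch of the rank-1 NMF idea behind such continuum removal (assuming the continuum/down-welling component is the dominant rank-1 part of the spectra; the spectra below are synthetic, not the thesis measurements):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
n_pixels, n_bands = 200, 80
continuum = np.linspace(1.0, 2.0, n_bands)          # smooth baseline shape
features = 0.1 * rng.random((n_pixels, n_bands))    # small absorption features
spectra = rng.uniform(0.5, 1.5, (n_pixels, 1)) * continuum + features

# Rank-1 NMF: W @ H approximates the dominant (continuum-like) component.
model = NMF(n_components=1, init="nndsvda", max_iter=500)
W = model.fit_transform(spectra)
H = model.components_
continuum_removed = spectra / np.clip(W @ H, 1e-9, None)  # divide out the continuum
print(continuum_removed.shape)
```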

    Generación de una librería RVC – CAL para la etapa de determinación de endmembers en el proceso de análisis de imágenes hiperespectrales

    Hyperspectral imaging allows us to collect information with high spectral resolution: hundreds of bands covering from the infrared to the ultraviolet spectrum. These images are having a strong impact in the medical field; in particular, their use in the detection of different types of cancer stands out. In this field, one of the main current problems is real-time analysis because, owing to the great data volume of these images, the required computational power is very high. One of the main research lines that deals with this problem is based on the idea of distributing the analysis over several cores working in parallel. Following this research line, this document describes the development of a library for RVC-CAL – a language especially designed for multimedia applications that allows parallelization to be performed in an intuitive way – which gathers the functions needed to implement two of the four stages of the hyperspectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by the one carried out by Raquel Lazcano in her Diploma Project, where the functions needed to complete the other two stages of the unmixing chain are developed. The document is divided into several parts. The first presents the motivation for this Diploma Project and the main objectives to be achieved. After that, a broad study of the state of the art is given, explaining hyperspectral images as well as the software and hardware platforms that will be used to split the processing across cores and to detect the problems that may arise when doing so. Once the theoretical basis has been presented, we focus on the methodology followed to compose the unmixing chain and to generate the library; an important point in this part is the use of C++ libraries specialized in complex matrix operations. After explaining the methodology, we present the results obtained first stage by stage and then for the complete processing chain, implemented on one or several cores. Finally, we draw a series of conclusions from the analysis of the different algorithms in terms of quality of results, processing times and resource consumption, and we propose possible future lines of work related to these results.
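
The thesis implements these stages in RVC-CAL; purely as an illustration of the first of the two stages it covers (dimensionality reduction, shown here via PCA on a synthetic cube; the band counts and sizes are made up), a Python sketch might look like:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
H, W, B = 64, 64, 120                  # synthetic cube: 120 spectral bands
cube = rng.random((H, W, B))

pixels = cube.reshape(-1, B)           # one row per pixel
pca = PCA(n_components=10)             # keep the 10 strongest components
reduced = pca.fit_transform(pixels).reshape(H, W, 10)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```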

    Blind Hyperspectral Unmixing Using Autoencoders

    The subject of this thesis is blind hyperspectral unmixing using deep learning based autoencoders. Two methods based on autoencoders are proposed and analyzed. Both methods seek to exploit the spatial correlations in the hyperspectral images to improve the performance: one by using multitask learning to simultaneously unmix a neighbourhood of pixels, the other by using a convolutional neural network autoencoder. This increases the consistency and robustness of the methods. In addition, a review of the various autoencoder methods in the literature is given, along with a detailed discussion of different types of autoencoders. The thesis concludes with a critical comparison of eleven different autoencoder based methods. Ablation experiments are performed to answer the question of why autoencoders are so effective in blind hyperspectral unmixing, and an opinion is given on what the future of autoencoder unmixing holds. The main contributions of the thesis are the following: - A new autoencoder method, MTLAEU, which directly exploits the spatial correlation of spectra in hyperspectral images to improve unmixing performance; the method uses multitask learning to unmix a neighbourhood of spectra at a time. - A new method, CNNAEU, which uses 2D convolutional neural networks for both the encoder and the decoder and is the first published method to do so; it is trained on image patches, so the spatial structure of the image being unmixed is preserved throughout the method. - A comprehensive and detailed review of published autoencoder methods for hyperspectral unmixing, including an introduction to autoencoders and their earliest types, an overview of the main published autoencoder-based unmixing methods, and a critical comparison of 11 different autoencoder methods. This work was supported by The Icelandic Research Fund under Grants 174075-05 and 207233-05.
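
As a minimal sketch of the general autoencoder-unmixing idea the thesis builds on (not the MTLAEU or CNNAEU architectures themselves; the layer sizes and data are arbitrary), the decoder is kept linear so that its weights play the role of endmembers, while the encoder output is normalized to behave like abundances:

```python
import torch
import torch.nn as nn

bands, endmembers = 100, 4

class UnmixingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(bands, 32), nn.ReLU(),
            nn.Linear(32, endmembers), nn.Softmax(dim=-1))  # abundances: >=0, sum to 1
        self.decoder = nn.Linear(endmembers, bands, bias=False)  # weights ~ endmembers

    def forward(self, x):
        a = self.encoder(x)           # abundance estimates
        return self.decoder(a), a     # reconstructed spectrum, abundances

model = UnmixingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, bands)            # synthetic batch of pixel spectra
for _ in range(100):                  # reconstruction-driven training loop
    recon, _ = model(x)
    loss = ((recon - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
endmember_matrix = model.decoder.weight.detach()  # (bands, endmembers)
```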

    Bayesian fusion of multi-band images: A powerful tool for super-resolution

    Hyperspectral (HS) imaging, which consists of acquiring the same scene in several hundreds of contiguous spectral bands (a three dimensional data cube), has opened a new range of relevant applications, such as target detection [MS02], classification [C.-03] and spectral unmixing [BDPD+12]. However, while HS sensors provide abundant spectral information, their spatial resolution is generally more limited. Thus, fusing the HS image with other highly resolved images of the same scene, such as multispectral (MS) or panchromatic (PAN) images, is an interesting problem. The problem of fusing a high spectral and low spatial resolution image with an auxiliary image of higher spatial but lower spectral resolution, also known as multi-resolution image fusion, has been explored for many years [AMV+11]. From an application point of view, this problem is also important, as motivated by recent national programs, e.g., the Japanese next-generation space-borne hyperspectral image suite (HISUI), which fuses co-registered MS and HS images acquired over the same scene under the same conditions [YI13]. Bayesian fusion allows for an intuitive interpretation of the fusion process via the posterior distribution. Since the fusion problem is usually ill-posed, the Bayesian methodology offers a convenient way to regularize it by defining an appropriate prior distribution for the scene of interest. The aim of this thesis is to study new multi-band image fusion algorithms to enhance the resolution of hyperspectral images. In the first chapter, a hierarchical Bayesian framework is proposed for multi-band image fusion by incorporating a forward model, statistical assumptions and a Gaussian prior for the target image to be restored. To derive the Bayesian estimators associated with the resulting posterior distribution, two algorithms, based on Monte Carlo sampling and on an optimization strategy, have been developed. In the second chapter, a sparse regularization using dictionaries learned from the observed images is introduced as an alternative to the naive Gaussian prior of the first chapter for regularizing the ill-posed problem. Identifying the supports jointly with the dictionaries circumvents the difficulty inherent in sparse coding. To minimize the target function, an alternate optimization algorithm has been designed, which accelerates the fusion process considerably compared with the simulation-based method. In the third chapter, by exploiting intrinsic properties of the blurring and downsampling matrices, a much more efficient fusion method is proposed, thanks to a closed-form solution of the Sylvester matrix equation associated with maximizing the likelihood. The proposed solution can be embedded into an alternating direction method of multipliers or a block coordinate descent method to incorporate different priors or hyper-priors for the fusion problem, allowing for Bayesian estimators. In the last chapter, a joint multi-band image fusion and unmixing scheme is proposed by combining the well-established linear spectral mixture model with the forward model. The joint fusion and unmixing problem is solved in an alternating optimization framework, mainly consisting of solving a Sylvester equation and projecting onto the simplex resulting from the non-negativity and sum-to-one constraints. The simulation results obtained on synthetic and semi-synthetic images illustrate the advantages of the developed Bayesian estimators, both qualitatively and quantitatively.
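
Purely to illustrate the kind of Sylvester matrix equation that appears in such a likelihood maximization (the matrices below are small random stand-ins, not the actual blurring and downsampling operators of the thesis), SciPy exposes a direct solver:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(4)
# A Sylvester equation A X + X B = Q; here A, B, Q are random stand-ins.
A = rng.random((5, 5)) + 5 * np.eye(5)   # diagonal shift keeps the equation well-posed
B = rng.random((7, 7)) + 5 * np.eye(7)
Q = rng.random((5, 7))

X = solve_sylvester(A, B, Q)
print(np.allclose(A @ X + X @ B, Q))     # True: closed-form solution recovered
```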

    Robust hyperspectral image reconstruction for scene simulation applications

    This thesis presents the development of a spectral reconstruction method for multispectral (MSI) and hyperspectral (HSI) applications through enhanced dictionary learning and spectral unmixing methodologies. Earth observation/surveillance is largely undertaken by MSI sensing such as that given by Landsat, WorldView, Sentinel etc.; however, the practical usefulness of the MSI data set is very limited, mainly because of the very small number of wave bands that MSI imagery can provide. One means to remedy this major shortcoming is to extend the MSI into HSI without the need for expensive hardware investment. Specifically, spectral reconstruction has been one of the most critical elements in applications such as hyperspectral scene simulation. Hyperspectral scene simulation has been an important technique, particularly for defence applications. Scene simulation creates a virtual scene in which the modelling of the materials can be tailored freely, allowing certain parameters of the model to be studied. In the defence sector this is the most cost-effective technique for evaluating the vulnerability of soldiers/vehicles before they are deployed to foreign ground. The simulation of a hyperspectral scene requires the details of the materials in the scene, which are normally not available. The current state of the art tries to make use of MSI satellite data and to transform it into HSI for hyperspectral scene simulation. One way to achieve this is through a reconstruction algorithm, commonly known as spectral reconstruction, which turns the MSI into HSI using an optimisation approach. The methodology adopted in this thesis is the development of a robust dictionary learning method to estimate the endmembers (EM) robustly; once the EMs are found, the abundances of the materials in the scene can subsequently be estimated through a linear unmixing approach. The conventional approach to material allocation in most hyperspectral scene simulators has been the Texture Material Mapper (TMM) algorithm, which allocates materials from a spectral library (a database of pre-compiled endmember materials) according to the minimum spectral Euclidean distance to a candidate pixel of the scene. This approach is shown (in this work) to be highly inaccurate, with large scene reconstruction error. This research attempts to use a dictionary learning technique for material allocation, solving it as an optimisation problem with the objectives of: (i) reconstructing the scene as closely as possible to the ground truth, with a fraction of the error given by the TMM method; and (ii) learning trace materials by clustering with 2-3 times the number of species (i.e. the intrinsic dimension) in the scene, to ensure that all material species in the scene are included in the scene reconstruction. Furthermore, two approaches complementing the goals of the learned dictionary have been proposed in this work: a rapid orthogonal matching pursuit (r-OMP), which enhances the performance of the orthogonal matching pursuit algorithm, and a semi-blind approximation of the irradiance of all pixels in the scene, including those in the shaded regions. The main result of this research is the demonstration of the effectiveness of the proposed algorithms using real data sets. The SCD-SOMP method is shown to be capable of learning both the background and trace materials even for a dictionary with a small number of atoms (≈10). The KMSCD method is found to be the more versatile, with an overcomplete (non-orthogonal) dictionary capable of learning trace materials with high scene reconstruction accuracy (a 2x accuracy enhancement over that simulated using the TMM method). Although this work has achieved an incremental improvement in spectral reconstruction, the need for dictionary training using hyperspectral data sets has been identified as one limitation to be removed in future research.
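
As a loose sketch of OMP-based spectral reconstruction (this uses plain OMP from scikit-learn, not the thesis's r-OMP; the dictionary, spectral response and sizes are invented), an MSI pixel is sparse-coded against the dictionary as seen through the MSI spectral response, and the resulting code then synthesizes the full HSI spectrum:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
hsi_bands, msi_bands, atoms = 100, 8, 40
D = rng.random((hsi_bands, atoms))       # hypothetical HSI dictionary (endmembers)
R = rng.random((msi_bands, hsi_bands))   # hypothetical MSI spectral response
R /= R.sum(axis=1, keepdims=True)        # each MSI band averages HSI bands

true_code = np.zeros(atoms); true_code[[3, 17]] = [0.6, 0.4]  # sparse ground truth
msi_pixel = R @ (D @ true_code)          # observed low-band measurement

# Sparse-code the MSI pixel against the projected dictionary R @ D,
# then reconstruct the HSI spectrum with the same sparse code.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
omp.fit(R @ D, msi_pixel)
hsi_estimate = D @ omp.coef_
err = np.linalg.norm(hsi_estimate - D @ true_code) / np.linalg.norm(D @ true_code)
print(err)  # small in this noiseless example
```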