
    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Accurate material estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described, along with the underlying mathematical problems and potential solutions. Algorithm characteristics are illustrated experimentally.
    Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
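Under the linear mixing model assumed above, each pixel spectrum is approximately a nonnegative, sum-to-one combination of endmember signatures. A minimal sketch of per-pixel abundance estimation on synthetic data (the function name, data, and use of nonnegative least squares are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E, y):
    """Estimate nonnegative abundances a with y ~ E @ a, then rescale
    so the abundances approximately satisfy the sum-to-one constraint."""
    a, _ = nnls(E, y)          # nonnegativity-constrained least squares
    s = a.sum()
    return a / s if s > 0 else a

# Synthetic example: 5 spectral bands, 2 endmembers, known 70/30 mixture.
rng = np.random.default_rng(0)
E = rng.random((5, 2))          # endmember signatures (bands x endmembers)
a_true = np.array([0.7, 0.3])   # true abundances
y = E @ a_true                  # noise-free mixed pixel
a_hat = unmix_pixel(E, y)       # should recover approximately [0.7, 0.3]
```

With noise-free data and a full-rank endmember matrix, the constrained fit recovers the true abundances; real pixels add noise and endmember variability, which is exactly why the survey covers more robust estimators.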

    Hyperspectral Remote Sensing Data Analysis and Future Challenges


    Implementação em hardware reconfiguråvel de método de separação de dados hiperespetrais

    Final project report for the Master's degree in Electronics and Telecommunications Engineering. Hyperspectral sensors acquire large amounts of data with high spectral resolution. These datasets are used to classify areas of the Earth's surface or to detect specific targets, but some applications require real-time processing. On-board processing systems have recently emerged to reduce the amount of data transmitted to ground stations and thereby reduce the delay between transmission and data analysis. Such systems need to be compact, built on reconfigurable hardware such as field programmable gate arrays (FPGAs). This work proposes an FPGA architecture that parallelizes the vertex component analysis (VCA) method for hyperspectral data unmixing. The work is developed on a ZedBoard, which contains a Xilinx Zynq-7000 XC7Z020. In the first phase, the method's performance is analysed, in spectral terms, without the dimensionality-reduction pre-processing step, and the method is optimized to reduce its computational weight and complexity. The orthogonalization process, performed in the original method by a singular value decomposition (SVD), is the most computationally expensive part of the algorithm; it is simplified using a QR decomposition that reuses the orthogonal vectors already determined. The precision the method needs to maintain the same performance is also analysed, and it is concluded that at least 48-bit fixed-point or 32-bit floating-point arithmetic is required. In the second phase, an architecture that parallelizes the optimized method is designed; it is scalable and can process multiple pixels and/or spectral bands in parallel.
    The architecture is implemented and dimensioned for the AVIRIS sensor, which acquires 512 pixels with 224 spectral bands in 8.3 ms; the architecture processes 614 pixels and extracts eight spectral signatures in 1.57 ms, so the implemented architecture is appropriate for real-time hyperspectral data processing.
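The QR-based simplification described in the abstract amounts to reusing the orthogonal vectors already computed instead of recomputing a full SVD at each endmember extraction step. A toy sketch of that iteration (`vca_like` is an illustrative reconstruction, not the thesis code; it omits the dimensionality-reduction and noise-handling steps of real VCA):

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Toy VCA-style endmember extraction on Y (bands x pixels):
    at each step, draw a random direction, remove its component along
    the endmembers found so far with single Gram-Schmidt (QR-update)
    steps that reuse the stored orthonormal vectors, and pick the pixel
    with the largest projection onto the residual direction."""
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    idx, Q = [], []              # selected pixel indices, orthonormal basis
    for _ in range(p):
        w = rng.standard_normal(L)
        for q in Q:              # reuse previously computed vectors
            w -= (q @ w) * q     # one cheap orthogonalization step
        w /= np.linalg.norm(w)
        idx.append(int(np.argmax(np.abs(w @ Y))))
        e = Y[:, idx[-1]].astype(float)
        for q in Q:              # extend the basis with the new endmember
            e -= (q @ e) * q
        n = np.linalg.norm(e)
        if n > 1e-12:
            Q.append(e / n)
    return idx

# Two pure pixels (columns 0 and 1) plus two mixtures of them.
Y = np.array([[1.0, 0.0, 0.5, 0.25],
              [0.0, 1.0, 0.5, 0.75],
              [0.0, 0.0, 0.0, 0.00]])
idx = vca_like(Y, 2)  # selects the two pure pixels
```

Because |w . y| is a maximum of linear functions, its argmax over a convex set of pixels always lands on a vertex of their convex hull, i.e. a pure pixel, which is the geometric intuition behind VCA.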

    Mineral identification using data-mining in hyperspectral infrared imagery

    The geological applications of hyperspectral infrared imagery mainly consist of mineral identification, mapping, and core logging, usually carried out in situ with airborne sensors or portable instruments. The discovery of indicator minerals has greatly improved mineral exploration, partly owing to the use of portable instruments, and automated systems would increase both the quality of exploration and the precision with which indicators are detected. This thesis applies machine learning methods to the analysis of hyperspectral images acquired at infrared wavelengths, with the goal of identifying small mineral grains used as mineralogical indicators; a potential application is a software tool to assist sample analysis during mineral exploration. The experiments were conducted under laboratory conditions in the long-wave infrared (LWIR, 7.7 ”m to 11.8 ”m) with an LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The work begins with a method to calculate continuum removal based on Non-negative Matrix Factorization (NMF): a rank-1 NMF is used to estimate the down-welling radiance, which is then compared with other conventional methods. The results indicate successful suppression of the continuum from the spectra, enabling comparison with existing spectral libraries.
    Afterwards, towards an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine, and quartz grains; the unsupervised approach proved more suitable because it does not depend on a training stage. Two algorithms were then compared for creating False Colour Composites (FCC) with a clustering approach; this comparison showed significant computational efficiency (convergence up to twenty times faster) and promising identification performance relative to the literature. Tests on LWIR data, however, revealed difficulty in predicting the grain surface for irregular grains and in the presence of mineral aggregates. A quantitative comparison was then made between two ground-truth (GT) databases, rigid-GT (manual labelling of regions) and observed-GT (manual labelling of pixels): accuracy was up to 1.5 times higher with observed-GT than with rigid-GT. For the last two experiments, the samples were also examined by Micro X-ray Fluorescence (XRF) and Scanning Electron Microscopy (SEM) to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The XRF imagery was compared with automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and was also used for GT validation. Overall, the four methods of this thesis (1. continuum removal; 2. classification or clustering for mineral identification; 3. two clustering algorithms for mineral spectra; 4. reliability verification) represent beneficial methodologies for mineral identification: they are non-destructive, relatively accurate, and computationally cheap, which qualifies them for use under laboratory conditions or in the field.
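The continuum-removal step described above rests on a rank-1 NMF. For a strictly positive spectra matrix, the rank-1 multiplicative updates reduce to exact alternating least-squares fits, so convergence is fast. A minimal sketch (illustrative only, not the thesis implementation; dividing the spectra by the rank-1 fit is one common way to express continuum removal):

```python
import numpy as np

def rank1_nmf(X, iters=50):
    """Rank-1 NMF of a positive matrix X: X ~ np.outer(w, h), w, h >= 0.
    For rank 1 the multiplicative updates simplify to exact alternating
    least squares, and positivity of X keeps w and h positive."""
    rng = np.random.default_rng(1)
    w = rng.random(X.shape[0]) + 0.1
    for _ in range(iters):
        h = (w @ X) / (w @ w)        # best h given w
        w = (X @ h) / (h @ h)        # best w given h
    return w, h

# The smooth rank-1 component plays the role of the continuum;
# dividing it out yields continuum-removed spectra.
X = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # toy exactly-rank-1 "spectra"
w, h = rank1_nmf(X)
X_cr = X / np.outer(w, h)                        # continuum-removed (all ~1 here)
```

On real LWIR cubes the residual `X_cr` retains the absorption features that are then matched against spectral libraries.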

    Compressive Sensing and Imaging Applications

    Compressive sensing (CS) is a new sampling theory that allows signals to be reconstructed from sub-Nyquist measurements. It states that a signal can be recovered exactly from randomly undersampled data points if the signal is sparse in some transform domain (wavelet, Fourier, etc.). Instead of being sampled uniformly in a local scheme, the signal is correlated with a series of sensing waveforms; these waveforms form the so-called sensing matrix or measurement matrix. Every measurement is a linear combination of randomly selected signal components. By applying a nonlinear convex optimization algorithm, the original signal can be recovered. Signal acquisition and compression are therefore realized simultaneously, and the amount of information to be processed is considerably reduced. Due to its unique sensing and reconstruction mechanism, CS creates a new situation in signal-acquisition hardware design as well as software development: imaging sensors face increasing pressure to sense modalities beyond the visible (ultraviolet, infrared, terahertz, etc.), and algorithms must accommodate demands for higher-dimensional datasets (hyperspectral or video data cubes). Combining CS with traditional optical imaging extends the capabilities and improves the performance of existing equipment and systems. Our research work focuses on the direct application of compressive sensing for imaging in both 2D and 3D cases, such as infrared imaging, hyperspectral imaging, and sum frequency generation microscopy. Data acquisition and compression are combined into one step. The computational complexity is passed to the receiving end, which always has sufficient processing power, while the sensing stage is kept as simple and cheap as possible. In short, a simple optical engine structure, a robust measuring method, and high-speed acquisition make compressive-sensing-based imaging systems a strong competitor to traditional ones.
    These applications have benefited, and will continue to benefit, our lives in deeper and wider ways.
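The measure-then-recover pipeline above can be sketched end to end on synthetic data. The example below uses a Gaussian sensing matrix and a small hand-rolled Orthogonal Matching Pursuit, a greedy alternative to the convex optimization mentioned in the abstract; the signal, dimensions, and solver choice are all illustrative assumptions:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x
    from m < n linear measurements y = Phi @ x."""
    m, n = Phi.shape
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ r)))   # most correlated column
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ xs            # update the residual
    x = np.zeros(n)
    x[support] = xs
    return x

# A 3-sparse signal of length 64, sensed with 32 random measurements.
rng = np.random.default_rng(2)
n, m, k = 64, 32, 3
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -2.0, 0.5]
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x_true                                # sub-Nyquist measurements
x_hat = omp(Phi, y, k)                          # recovered signal
```

Each row of `Phi` plays the role of one sensing waveform: a single measurement mixes many signal components, and sparsity is what makes the underdetermined inversion well posed.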

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have expanded along different paths in each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) wishing to conduct novel investigations on this challenging topic, by supplying sufficient detail and references.

    TĂ©cnicas de compresiĂłn de imĂĄgenes hiperespectrales sobre hardware reconfigurable

    Thesis, Universidad Complutense de Madrid, Facultad de InformĂĄtica, defended 18-12-2020. Sensors are nowadays present in all aspects of human life. When possible, sensors are used remotely: this is less intrusive, avoids interference in the measuring process, and is more convenient for the scientist. One of the most recurrent concerns of recent decades has been the sustainability of the planet and how the changes it is facing can be monitored. Remote sensing of the Earth has seen an explosion in activity, with satellites now being launched on a weekly basis to perform remote analysis of the Earth, and planes surveying vast areas for closer analysis...

    A Survey on FPGA-Based Sensor Systems: Towards Intelligent and Reconfigurable Low-Power Sensors for Computer Vision, Control and Signal Processing

    The current trend in the evolution of sensor systems seeks to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware technology that can be exploited to obtain reconfigurable sensor systems. This adaptation capability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored for the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability, and their strong performance in algorithm development. FPGAs have improved the performance of sensor systems and triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power sensors is being developed in Spain based on FPGAs. In this paper, a review of these developments is presented, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field.
    The research leading to these results received funding from the Spanish Government and European FEDER funds (DPI2012-32390), the Valencia Regional Government (PROMETEO/2013/085), and the University of Alicante (GRE12-17).