Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered within their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to the low
spatial resolution of HSCs, microscopic material mixing, and multiple
scattering, the spectra measured by HSCs are mixtures of the spectra of the
materials in the scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally. Comment: This work has been accepted for
publication in the IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing.
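The linear mixture model the overview builds on can be made concrete in a few lines. Below is a minimal sketch with synthetic data (all dimensions and values are illustrative assumptions, not taken from the paper): a pixel is modeled as a non-negative, sum-to-one combination of known endmember signatures, and the abundances are estimated with non-negative least squares plus an appended sum-to-one row.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic illustration of the linear mixing model: pixel = E @ a + noise,
# with abundances a >= 0 summing to one. Dimensions are arbitrary choices.
rng = np.random.default_rng(0)
n_bands, n_endmembers = 50, 3

E = rng.random((n_bands, n_endmembers))     # endmember spectral signatures
a_true = np.array([0.6, 0.3, 0.1])          # ground-truth abundances
pixel = E @ a_true + 0.001 * rng.standard_normal(n_bands)

# Fully constrained estimate: non-negativity comes from NNLS, and the
# sum-to-one constraint is enforced softly by appending a weighted row.
delta = 10.0
E_aug = np.vstack([E, delta * np.ones((1, n_endmembers))])
y_aug = np.append(pixel, delta)
a_est, _ = nnls(E_aug, y_aug)
print(np.round(a_est, 3))
```

The soft sum-to-one row is a common way to approximate the fully constrained least-squares problem with an off-the-shelf NNLS solver; the geometrical, statistical, and sparse methods surveyed in the paper replace or augment this estimation step.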
Reconfigurable-hardware implementation of a hyperspectral data unmixing method
Final Master's project report submitted for the degree of Master in Electronic and Telecommunications Engineering. Hyperspectral sensors acquire large datasets with high spectral resolution. These datasets are used to classify areas of the Earth's surface or to detect specific targets. Some applications, however, require real-time processing. On-board processing systems have recently emerged to reduce the amount of data transmitted to ground stations and thereby reduce the delay between transmission and data analysis. Such systems need to be compact and built on reconfigurable hardware, such as field programmable gate arrays (FPGAs).
This work proposes an FPGA architecture that parallelizes the vertex component analysis (VCA) method for hyperspectral unmixing. The work is developed on a ZedBoard, which contains a Xilinx Zynq-7000 XC7Z020.
In the first phase, the method's performance is analyzed, in spectral terms, without the dimensionality-reduction pre-processing step, and the method is optimized to reduce its computational weight and complexity. The orthogonalization process, performed in the original method by a singular value decomposition (SVD), is the most expensive part of the algorithm. It is simplified using a QR decomposition that reuses the orthogonal vectors already determined. The numerical precision the method needs to maintain the same performance is also analyzed; it is concluded that at least 48-bit fixed-point or 32-bit floating-point arithmetic is required.
In the second phase, an architecture that parallelizes the optimized method is designed. It is scalable and can process several pixels and/or spectral bands in parallel.
The architecture is implemented and dimensioned for the AVIRIS sensor, which captures 512 pixels with 224 spectral bands in 8.3 ms; the architecture processes 614 pixels and extracts eight spectral signatures in 1.57 ms, so the implemented architecture is suitable for real-time hyperspectral data processing.
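The SVD-to-QR simplification described above boils down to growing an orthonormal basis one vector at a time instead of recomputing a full decomposition per endmember. The following is a hedged sketch on synthetic data (not the thesis's FPGA design or the exact VCA algorithm): each iteration selects the pixel with the largest residual after projection onto the span of the vectors found so far, then extends the basis with a single Gram-Schmidt step that reuses the previously computed orthogonal vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_pixels, n_endmembers = 30, 200, 4

# Synthetic mixed pixels: columns of Y are convex combinations of endmembers.
E_true = rng.random((n_bands, n_endmembers))
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T
Y = E_true @ A

Q = np.zeros((n_bands, 0))     # orthonormal basis, grown incrementally
picked = []
for _ in range(n_endmembers):
    # Residual of every pixel w.r.t. the subspace found so far.
    R = Y - Q @ (Q.T @ Y)
    k = int(np.argmax(np.linalg.norm(R, axis=0)))
    picked.append(k)
    # One Gram-Schmidt/QR step: R[:, k] is already orthogonal to Q, so
    # normalizing it extends the basis without touching earlier columns.
    v = R[:, k]
    Q = np.column_stack([Q, v / np.linalg.norm(v)])
print(sorted(picked))
```

The point of the optimization is visible in the loop: no decomposition is recomputed from scratch; each new direction is orthogonalized only against the columns of Q that already exist.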
Mineral identification using data-mining in hyperspectral infrared imagery
The geological applications of hyperspectral infrared imagery mainly consist of mineral identification, mapping, airborne or portable acquisition, and core logging. Finding mineral indicators offers considerable benefits for mineralogy and mineral exploration, which usually involve portable instruments and core logging. Moreover, faster and more automated systems increase the precision of identifying mineral indicators and help avoid misclassification. The objective of this thesis was therefore to create a tool that uses hyperspectral infrared imagery and processes the data through image analysis and machine learning methods to identify small mineral grains used as mineral indicators. Such a system could serve as an assistant for geological analysis and mineral exploration in a range of settings. The experiments were conducted in laboratory conditions in the long-wave infrared (LWIR, 7.7 Όm to 11.8 Όm) with a LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The process began with a method to compute continuum removal: Non-negative Matrix Factorization (NMF) is applied to extract a rank-1 factorization and estimate the down-welling radiance, which is then compared with more conventional methods. The results indicate successful suppression of the continuum from the spectra, enabling comparison against spectral libraries.
Afterwards, to build an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine, and quartz grains. The unsupervised approach proved more suitable because it does not depend on a training stage. Two algorithms were then tested for creating False Color Composites (FCC) with a clustering approach. This comparison showed significant computational efficiency (convergence more than 20 times faster) and promising identification performance relative to results in the literature. Tests on LWIR data, however, revealed difficulty in predicting the grain surface when grains were irregular or mineral aggregates were present. Next, results were compared quantitatively against two Ground Truth (GT) databases, rigid-GT (manual labeling of regions) and observed-GT (manual labeling of pixels); using observed-GT improved accuracy up to 1.5 times over rigid-GT. For the last two experiments, the samples were also examined by Micro X-ray Fluorescence (XRF) and Scanning Electron Microscope (SEM) to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The XRF imagery was compared with automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and was used for GT validation. Overall, the four methods of this thesis (1. continuum removal; 2. classification or clustering for mineral identification; 3. two clustering algorithms for mineral spectra; 4. reliability verification) represent beneficial methodologies for mineral identification. They have the advantages of being non-destructive, relatively accurate, and computationally inexpensive, which could qualify them for use in laboratory conditions or in the field.
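The rank-1 NMF step can be illustrated with the standard multiplicative-update rules. This is a hedged sketch on synthetic spectra (the dimensions, baseline shape, and absorption feature are invented for the example, and the toy matrix is exactly rank-1), not the thesis's code; in the thesis the rank-1 term plays the role of the down-welling radiance that is removed from the measured spectra.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands = 40, 60

# Synthetic spectra: a smooth continuum times a fixed absorption feature,
# scaled per pixel -- an exactly rank-1 non-negative matrix.
continuum = np.linspace(1.0, 2.0, n_bands)
feature = 1.0 - 0.3 * np.exp(-0.5 * ((np.arange(n_bands) - 30) / 3.0) ** 2)
X = rng.uniform(0.8, 1.2, size=(n_pixels, 1)) * (continuum * feature)

# Rank-1 NMF via multiplicative updates (Lee-Seung): X ~ w @ h with w, h >= 0.
w = np.ones((n_pixels, 1))
h = np.ones((1, n_bands))
for _ in range(200):
    h *= (w.T @ X) / (w.T @ w @ h + 1e-12)
    w *= (X @ h.T) / (w @ (h @ h.T) + 1e-12)

rel_err = np.linalg.norm(X - w @ h) / np.linalg.norm(X)
print(rel_err)
```

Because the toy matrix is exactly rank-1, the factorization reconstructs it almost perfectly; on real spectra the recovered row h is the smooth component that is divided out of each pixel's spectrum.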
Compressive Sensing and Imaging Applications
Compressive sensing (CS) is a sampling theory that allows signals to be reconstructed from sub-Nyquist measurements. It states that a signal can be recovered exactly from randomly undersampled data points if the signal is sparse in some transform domain (wavelet, Fourier, etc.). Instead of sampling the signal uniformly on a local grid, it is correlated with a series of sensing waveforms, which together form the so-called sensing matrix or measurement matrix. Every measurement is a linear combination of randomly picked signal components. By applying a nonlinear convex optimization algorithm, the original signal can be recovered. Signal acquisition and compression are therefore realized simultaneously, and the amount of information to be processed is considerably reduced. Due to its unique sensing and reconstruction mechanism, CS creates new opportunities in signal acquisition hardware design as well as software development: it helps handle the increasing pressure on imaging sensors for modalities beyond the visible (ultraviolet, infrared, terahertz, etc.) and on algorithms that must accommodate higher-dimensional datasets (hyperspectral or video data cubes). Combining CS with traditional optical imaging extends the capabilities and improves the performance of existing equipment and systems. Our research focuses on the direct application of compressive sensing to imaging in both 2D and 3D settings, such as infrared imaging, hyperspectral imaging, and sum frequency generation microscopy. Data acquisition and compression are combined into one step; the computational complexity is shifted to the receiving end, which typically has ample processing power, while the sensing stage is kept as simple and inexpensive as possible. In short, a simple optical engine, a robust measurement method, and high-speed acquisition make compressive-sensing-based imaging systems strong competitors to traditional ones.
These applications already benefit our lives, and will continue to do so in deeper and wider ways.
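The acquisition-and-recovery pipeline described above can be sketched end to end. The example below is a minimal illustration with assumed parameters (signal length, measurement count, sparsity level, and the choice of ISTA as the solver are all inventions for the sketch, not details from this work): a sparse signal is measured by a random Gaussian matrix, and recovery uses ISTA, a basic proximal-gradient solver for the l1-regularized least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 128, 64, 5            # signal length, measurements, sparsity

# Sparse ground-truth signal (sparse directly in the identity basis here).
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Random Gaussian sensing matrix: every measurement mixes all components.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true

# ISTA: gradient step on 0.5*||y - Phi x||^2, then soft-thresholding
# (the proximal operator of lam*||x||_1).
lam = 0.01
L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = x + Phi.T @ (y - Phi @ x) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

In practice the sparsifying basis is a wavelet or Fourier transform rather than the identity, and accelerated or greedy solvers (FISTA, OMP) are common drop-in replacements.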
Multisource and Multitemporal Data Fusion in Remote Sensing
The sharp and recent increase in the availability of data captured by
different sensors combined with their considerably heterogeneous natures poses
a serious challenge for the effective and efficient processing of remotely
sensed data. Such an increase in remote sensing and ancillary datasets,
however, opens up the possibility of utilizing multimodal datasets in a joint
manner to further improve the performance of the processing approaches with
respect to the application at hand. Multisource data fusion has, therefore,
received enormous attention from researchers worldwide for a wide variety of
applications. Moreover, thanks to the revisit capability of several spaceborne
sensors, the integration of the temporal information with the spatial and/or
spectral/backscattering information of the remotely sensed data is possible and
helps to move from a representation of 2D/3D data to 4D data structures, where
the time variable adds new information as well as challenges for the
information extraction algorithms. There are a huge number of research works
dedicated to multisource and multitemporal data fusion, but the methods for the
fusion of different modalities have expanded in different paths according to
each research community. This paper brings together the advances of multisource
and multitemporal data fusion approaches with respect to different research
communities and provides a thorough and discipline-specific starting point for
researchers at different levels (i.e., students, researchers, and senior
researchers) wishing to conduct novel investigations of this challenging topic
by supplying sufficient detail and references.
Hyperspectral image compression techniques on reconfigurable hardware
Thesis, Universidad Complutense de Madrid, Facultad de InformĂĄtica, defended 18-12-2020. Sensors are nowadays present in all aspects of human life. When possible, sensors are used remotely: this is less intrusive, avoids interference with the measuring process, and is more convenient for the scientist. One of the most recurrent concerns in recent decades has been the sustainability of the planet and how the changes it is facing can be monitored. Remote sensing of the Earth has seen an explosion in activity, with satellites now being launched on a weekly basis to perform remote analysis of the Earth, and planes surveying vast areas for closer analysis...
A Survey on FPGA-Based Sensor Systems: Towards Intelligent and Reconfigurable Low-Power Sensors for Computer Vision, Control and Signal Processing
The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while at the same time decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware technology that can be exploited to obtain a reconfigurable sensor system; this adaptation capability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency afforded by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability, and their performance in the implementation of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power sensors based on FPGAs is being developed in Spain. In this paper, a review of these developments is presented, describing the FPGA technologies employed by the different research groups and providing an overview of future research within this field. The research leading to these results has received funding from the Spanish Government and European FEDER funds (DPI2012-32390), the Valencia Regional Government (PROMETEO/2013/085), and the University of Alicante (GRE12-17).