
    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras, and are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Owing to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene; accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described, along with mathematical problems and potential solutions, and algorithm characteristics are illustrated experimentally. This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
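The linear mixing model at the heart of this survey treats each pixel spectrum as a non-negative, sum-to-one combination of endmember signatures. As a minimal illustration (not one of the survey's algorithms), abundances can be estimated per pixel with non-negative least squares and then renormalised; the endmembers and abundances below are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_nnls(pixels, endmembers):
    """Estimate abundances under the linear mixing model.

    pixels:     (n_pixels, n_bands) measured spectra
    endmembers: (n_endmembers, n_bands) known endmember signatures
    Returns (n_pixels, n_endmembers) abundances, non-negative and
    renormalised to sum to one (a simple surrogate for the full
    sum-to-one constraint).
    """
    E = endmembers.T                                  # (n_bands, n_endmembers)
    abundances = np.array([nnls(E, p)[0] for p in pixels])
    sums = abundances.sum(axis=1, keepdims=True)
    return abundances / np.clip(sums, 1e-12, None)

# Synthetic check: 3 endmembers, 50 bands, 100 mixed pixels with mild noise
rng = np.random.default_rng(0)
E = rng.uniform(0.1, 1.0, size=(3, 50))
A_true = rng.dirichlet(np.ones(3), size=100)          # true abundances
Y = A_true @ E + 1e-3 * rng.standard_normal((100, 50))
A_est = unmix_nnls(Y, E)
print(np.abs(A_est - A_true).max())                   # small recovery error
```

With low noise and known endmembers the abundances are recovered closely; real unmixing must also estimate the endmembers themselves, as the survey discusses.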

    Optimizing spectral bands of airborne imager for tree species classification


    Data driven estimation of soil and vegetation attributes using airborne remote sensing

    Airborne remote sensing using imaging spectroscopy and LiDAR (Light Detection and Ranging) measurements enables us to quantify ecosystem and land-surface attributes. In this study we use high-resolution airborne remote sensing to characterize soil attributes and the structure of the vegetation canopy. Soil texture, organic matter, and chemical constituents are critical to ecosystem functioning, plant growth, and food security. However, most of the soil data available globally are of coarse resolution, at scales of 1:5 million, and lack the quantitative information needed for modeling and land-management decisions at field or catchment scales. Spatially contiguous, quantitative soil information is therefore of immense scientific value, and it can be obtained from airborne and space-borne imaging spectroscopy. Towards this goal we systematically explore the feasibility of characterizing soil properties from imaging spectroscopy using data-driven modeling approaches. We have developed a modeling framework for quantitative prediction of different soil attributes using airborne imaging spectroscopy and limited field soil grab-sample datasets. The results of our analysis using fine-resolution (7.6 m) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data collected over the midwestern United States immediately after the large 2011 Mississippi River flood indicate the feasibility of using the developed models for quantitative spatial prediction of soil attributes over large areas (> 700 sq. km) of the landscape. The quantitative predictions reveal coherent spatial correlations between differences in constituent concentrations and legacy landscape features, as well as immediate disturbances on the landscape due to extreme events. Further, for model validation using independent test data, we demonstrate that the results are better represented as a probability density function than by a single validation subset.
We have simulated up-scaled datasets at multiple spatial resolutions ranging from 10 m to 90 m from the AVIRIS data, including future space-based Hyperspectral Infrared Imager (HyspIRI)-like observations. These datasets are used to investigate the applicability of the developed modeling framework to the characterization of soil constituents at coarser spatial resolutions. We have outlined an evaluation framework with a set of metrics that considers point-scale model performance as well as the consistency of cross-scale spatial predictions. The results indicate that the ensemble quantification method is scalable over the entire range of airborne to space-borne spatial resolutions and establish the feasibility of quantifying soil constituents from space-based observations. Further, we develop a satellite retrieval framework which combines the developed modeling framework and spectral similarity measures for global-scale characterization of soils using a weighted constrained optimization framework. The retrieval algorithm takes advantage of repeat temporal satellite measurements to evolve a dynamic spectral library and improve soil characterization. Finally, we demonstrate that in addition to soil constituents, hyperspectral data can add value to characterizations of leaf area density (LAD) for dense overlapping canopies. We develop a method for estimating the vertical distribution of foliage, or LAD, using a combination of airborne LiDAR and hyperspectral data in a feature-based data fusion approach. Tree species classification from hyperspectral data is used to develop a novel ellipsoidal 'tree-shaped' voxel approach for characterizing the LAD of individual trees in a riparian forest setting. We found that the tree-shaped voxels represent a more realistic characterization of the upper and middle parts of the tree canopy, in terms of higher LAD values, for trees of different heights in a forest stand.

    Mineral identification using data-mining in hyperspectral infrared imagery

    The geological applications of hyperspectral infrared imagery mainly consist of mineral identification, mapping, and core logging, using airborne or portable instruments. Finding mineral indicators offers considerable benefits for mineralogy and mineral exploration, which usually involve portable instruments and core logging. Moreover, faster and more mechanized systems increase the precision of identifying mineral indicators and avoid possible misclassification. The objective of this thesis was therefore to create a tool that uses hyperspectral infrared imagery and processes the data through image analysis and machine learning methods to identify small mineral grains used as mineral indicators. Such a system could be applied in a range of circumstances to assist geological analysis and mineral exploration. The experiments were conducted under laboratory conditions in the long-wave infrared (7.7 ÎŒm to 11.8 ÎŒm, LWIR), with a LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The process began with a method to calculate the continuum removal: Non-negative Matrix Factorization (NMF) is applied to extract a rank-1 approximation and estimate the down-welling radiance, which is then compared with other conventional methods. The results indicate successful suppression of the continuum from the spectra, enabling comparison with spectral libraries.
Afterwards, to obtain an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine, and quartz grains. The results indicated that the unsupervised approach was more suitable because it does not depend on a training stage. Once these results were obtained, two algorithms were tested to create False Color Composites (FCC) using a clustering approach. The comparison showed significant computational efficiency (more than 20 times faster) and promising performance for mineral identification. Finally, the reliability of automated LWIR hyperspectral mineral identification was tested, and the difficulty of identifying irregular grain surfaces and mineral aggregates was verified. The results were compared against two ground truths (GT), rigid-GT (manual labelling of regions) and observed-GT (manual labelling of pixels), for quantitative evaluation; observed-GT increased the accuracy up to 1.5 times over rigid-GT. The samples were also examined by micro X-ray fluorescence (XRF) and scanning electron microscopy (SEM) to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The XRF imagery was compared with automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and was used for GT validation. Overall, the four methods of this thesis (1. continuum removal; 2. classification or clustering for mineral identification; 3. two clustering algorithms for mineral spectra; 4. reliability verification) represent beneficial methodologies for identifying minerals. These methods are non-destructive, relatively accurate, and computationally inexpensive, which qualifies them for use in laboratory conditions or in the field.
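The rank-1 NMF continuum removal described above can be sketched as follows. This is a simplified illustration on synthetic LWIR-like spectra (a shared smooth continuum with per-sample absorption dips), not the thesis's calibrated radiometric pipeline:

```python
import numpy as np
from sklearn.decomposition import NMF

def continuum_remove_rank1(spectra):
    """Divide each spectrum by its rank-1 NMF reconstruction.

    spectra: (n_spectra, n_bands), non-negative radiances.
    The rank-1 factor acts as a smooth multiplicative continuum;
    the ratio emphasises narrow absorption features.
    """
    nmf = NMF(n_components=1, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(spectra)          # (n_spectra, 1)
    H = nmf.components_                     # (1, n_bands)
    continuum = W @ H
    return spectra / np.clip(continuum, 1e-12, None)

# Synthetic radiances over the 7.7-11.8 um LWIR range: a shared broad
# continuum, per-sample brightness scales, and one narrow dip per sample
bands = np.linspace(7.7, 11.8, 200)
broad = np.exp(-0.5 * ((bands - 10.0) / 3.0) ** 2)
scales = [0.8, 1.0, 1.2]
centers = [8.5, 9.5, 10.5]
spectra = np.stack([
    s * broad * (1 - 0.3 * np.exp(-0.5 * ((bands - c) / 0.15) ** 2))
    for s, c in zip(scales, centers)])

removed = continuum_remove_rank1(spectra)
# After removal, the first sample's minimum sits near its dip at 8.5 um
print(round(float(bands[removed[0].argmin()]), 2))
```

In the thesis the rank-1 factor is tied to the estimated down-welling radiance; here it simply plays the role of a data-driven continuum.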

    Phenomenological modeling of image irradiance for non-Lambertian surfaces under natural illumination.

    Various vision tasks are usually confronted by appearance variations due to changes in illumination. For instance, in a recognition system, it has been shown that the variability in human face appearance owes more to changes in lighting conditions than to the person's identity. Theoretically, due to the arbitrariness of the lighting function, the space of all possible images of a fixed-pose object under all possible illumination conditions is infinite-dimensional. Nonetheless, it has been proven that the set of images of a convex Lambertian surface under distant illumination lies near a low-dimensional linear subspace. This result was later extended to include non-Lambertian objects with non-convex geometry. As such, vision applications concerned with recovering illumination, reflectance, or surface geometry from images would benefit from a low-dimensional generative model which captures appearance variations with respect to illumination conditions and surface reflectance properties, enabling the formulation of such inverse problems as parameter estimation. Typically, subspace construction boils down to performing a dimensionality reduction scheme, e.g. Principal Component Analysis (PCA), on a large set of (real or synthesized) images of the object(s) of interest with fixed pose but different illumination conditions. However, this approach has two major problems. First, the acquired or rendered image ensemble should be statistically significant with respect to capturing the full behavior of the sources of variation of interest, in particular illumination and reflectance. Second, the curse of dimensionality hinders numerical methods such as Singular Value Decomposition (SVD), which becomes intractable, especially with a large number of large-sized realizations in the image ensemble.
One way to bypass the need for a large image ensemble is to construct appearance subspaces using phenomenological models which capture appearance variations through mathematical abstraction of the reflection process. In particular, the harmonic expansion of the image irradiance equation can be used to derive an analytic subspace representing images under fixed pose but different illumination conditions, where the image irradiance equation is formulated in a convolution framework. Due to their low-frequency nature, irradiance signals can be represented using low-order basis functions, among which Spherical Harmonics (SH) have been extensively adopted. Ideally, a solution to the image irradiance (appearance) modeling problem should incorporate complex illumination, cast shadows, and realistic surface reflectance properties, moving away from the simplifying assumptions of Lambertian reflectance and single-source distant illumination. By handling arbitrary complex illumination and non-Lambertian reflectance, the appearance model proposed in this dissertation moves the state of the art closer to that ideal solution. This work primarily addresses the geometrical compliance of the hemispherical basis for representing surface reflectance, while presenting a compact yet accurate representation for arbitrary materials. To maintain the plausibility of the resulting appearance, the proposed basis is constructed in a manner that satisfies the Helmholtz reciprocity property while avoiding high computational complexity. It is believed that representing the illumination and the surface reflectance in the spherical and hemispherical domains, respectively, while complying with the physical properties of surface reflectance, provides better approximation accuracy of image irradiance than representation in the spherical domain alone.
Discounting subsurface scattering and surface emittance, this work proposes a surface reflectance basis, based on hemispherical harmonics (HSH), defined on the Cartesian product of the incoming and outgoing local hemispheres (i.e. w.r.t. surface points). This basis obeys physical properties of surface reflectance involving reciprocity and energy conservation. The basis functions are validated using analytical reflectance models as well as scattered reflectance measurements which might violate the Helmholtz reciprocity property (this can be filtered out through the process of projecting them on the subspace spanned by the proposed basis, where the reciprocity property is preserved in the least-squares sense). The image formation process of isotropic surfaces under arbitrary distant illumination is also formulated in the frequency space where the orthogonality relation between illumination and reflectance bases is encoded in what is termed as irradiance harmonics. Such harmonics decouple the effect of illumination and reflectance from the underlying pose and geometry. Further, a bilinear approach to analytically construct irradiance subspace is proposed in order to tackle the inherent problem of small-sample-size and curse of dimensionality. The process of finding the analytic subspace is posed as establishing a relation between its principal components and that of the irradiance harmonics basis functions. It is also shown how to incorporate prior information about natural illumination and real-world surface reflectance characteristics in order to capture the full behavior of complex illumination and non-Lambertian reflectance. The use of the presented theoretical framework to develop practical algorithms for shape recovery is further presented where the hitherto assumed Lambertian assumption is relaxed. 
With a single image under unknown general illumination, the underlying geometrical structure can be recovered while accounting explicitly for object reflectance characteristics (e.g. human skin types for facial images and teeth reflectance for human jaw reconstruction) as well as complex illumination conditions. Experiments on synthetic and real images illustrate the robustness of the proposed appearance model with respect to illumination variation. Keywords: computer vision, computer graphics, shading, illumination modeling, reflectance representation, image irradiance, frequency space representations, (hemi)spherical harmonics, analytic bilinear PCA, model-based bilinear PCA, 3D shape reconstruction, statistical shape from shading.
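For context, the Lambertian special case of the irradiance convolution mentioned above has a well-known closed form: the clamped-cosine reflectance kernel is diagonal in the spherical harmonic basis, so each irradiance coefficient is the corresponding illumination coefficient scaled per degree:

```latex
E(\mathbf{n}) \;=\; \sum_{l=0}^{\infty}\sum_{m=-l}^{l} A_l \, L_{lm} \, Y_{lm}(\mathbf{n}),
\qquad
A_0 = \pi, \quad A_1 = \tfrac{2\pi}{3}, \quad A_2 = \tfrac{\pi}{4},
\quad A_l = 0 \;\; \text{for odd } l > 1,
```

where $L_{lm}$ are the SH coefficients of the distant illumination and $\mathbf{n}$ is the surface normal. Since $A_l$ decays as $l^{-2}$ for even $l$, over 99% of the kernel's energy lies in $l \le 2$, i.e. nine coefficients; the hemispherical reflectance basis developed in this dissertation generalises this diagonal structure to non-Lambertian materials through the irradiance harmonics.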

    Computational intelligence techniques for maritime and coastal remote sensing

    The aim of this thesis is to investigate the potential of computational intelligence techniques for applications in the analysis of remotely sensed multi-spectral images. In particular, two problems are addressed. The first is the classification of oil spills at sea, while the second is the estimation of sea-bottom depth. In both cases, the exploitation of optical satellite data makes it possible to develop operational tools for easily accessing and monitoring large marine areas in an efficient and cost-effective way. Regarding the oil spill problem, public opinion today is certainly aware of the huge impact that oil tanker accidents and oil rig leaks have on the marine and coastal environment. It is less well known, however, that most of the oil released into our seas cannot be ascribed to accidental spills, but rather to illegal ballast-water discharge and pollutant dumping at sea during routine oil tanker operations. For this reason, any effort to improve oil spill detection systems is of great importance. So far, Synthetic Aperture Radar (SAR) data have been preferred to multi-spectral data for oil spill detection because of their all-weather, day-and-night capabilities, whereas optical images require clear-sky conditions and daylight. On the other hand, many features make an optical approach desirable, such as lower cost and more frequent revisits. Moreover, unlike SAR data, optical data are not affected by sea state and can reduce the false alarm rate, since they do not suffer from the main false alarm source in SAR data, namely calm sea regions. In this thesis the problem of oil spill classification is tackled by applying different machine learning techniques to a significant dataset of regions of interest collected from multi-spectral satellite images acquired by the MODIS sensor.
These regions are then assigned to one of two possible classes, oil spills or look-alikes, where look-alikes include any phenomena other than oil spills (e.g. algal blooms). Results show that efficient and reliable oil spill classification systems based on optical data are feasible and could offer valuable support to existing satellite-based monitoring systems. The estimation of sea-bottom depth from high-resolution multi-spectral satellite images is the second major topic of this thesis. The motivation for addressing this problem arises from the need to limit expensive and time-consuming measurement campaigns. Since satellite data make it possible to analyse large areas quickly, a solution is to employ intelligent techniques which, by exploiting a small set of depth measurements, can extend the bathymetry estimate to the much larger area covered by a multi-spectral satellite image. Once the training phase is complete, such techniques achieve very accurate results and, thanks to their generalization capabilities, provide reliable bathymetric maps covering wide areas. A crucial element is the training dataset, which is built by coupling a number of depth measurements, located in a limited part of the image, with the corresponding radiances acquired by the satellite sensor. A successful estimate essentially depends on how well the training dataset resembles the rest of the scene. On the other hand, the result is not affected by model uncertainties and systematic errors, as results from model-based analytic approaches are. In this thesis a neuro-fuzzy technique is applied to two case studies: two high-resolution multi-spectral images of the same area, acquired in different years and under different meteorological conditions.
Different situations of in-situ depth availability are considered in the study, and the effect of limited in-situ data on performance is evaluated. The effects of meteorological conditions and of training-set size reduction on overall performance are also taken into account. The results outperform previous studies on bathymetry estimation and indicate optimal strategies for planning data collection at sea.
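The neuro-fuzzy estimator itself is not detailed in the abstract. As an illustrative stand-in, the train-on-soundings / predict-scene-wide workflow can be sketched with a small neural-network regressor on synthetic data; the four-band radiances and the exponential depth-radiance link below are invented for the example:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_pixels, n_bands = 5000, 4                  # toy multispectral scene
radiance = rng.uniform(0, 1, size=(n_pixels, n_bands))

# Hypothetical link: water attenuation makes radiance decay with depth,
# so depth is modelled here as an exponential function of one band
depth = 20 * np.exp(-2 * radiance[:, 1]) + 0.1 * rng.standard_normal(n_pixels)

# In-situ soundings cover only a small part of the scene
train_idx = rng.choice(n_pixels, size=300, replace=False)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0)
model.fit(radiance[train_idx], depth[train_idx])

bathymetry = model.predict(radiance)         # depth map for the full scene
rmse = np.sqrt(np.mean((bathymetry - depth) ** 2))
print(f"scene-wide RMSE: {rmse:.2f} m")
```

As the abstract notes, the quality of such an estimate hinges on how well the labelled soundings represent the rest of the scene, not on a physical water-column model.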

    Subspace Representations for Robust Face and Facial Expression Recognition

    Analyzing human faces and modeling their variations have always been of interest to the computer vision community. Face analysis based on 2D intensity images is a challenging problem, complicated by variations in pose, lighting, blur, and non-rigid facial deformations due to facial expressions. Among the different sources of variation, facial expressions are of interest as important channels of non-verbal communication. Facial expression analysis is also affected by changes in view-point and inter-subject variations in performing different expressions. This dissertation makes an attempt to address some of the challenges involved in developing robust algorithms for face and facial expression recognition by exploiting the idea of proper subspace representations for data. Variations in the visual appearance of an object mostly arise due to changes in illumination and pose. So we first present a video-based sequential algorithm for estimating the face albedo as an illumination-insensitive signature for face recognition. We show that by knowing/estimating the pose of the face at each frame of a sequence, the albedo can be efficiently estimated using a Kalman filter. Then we extend this to the case of unknown pose by simultaneously tracking the pose as well as updating the albedo through an efficient Bayesian inference method performed using a Rao-Blackwellized particle filter. Since understanding the effects of blur, especially motion blur, is an important problem in unconstrained visual analysis, we then propose a blur-robust recognition algorithm for faces with spatially varying blur. We model a blurred face as a weighted average of geometrically transformed instances of its clean face. We then build a matrix, for each gallery face, whose column space spans the space of all the motion blurred images obtained from the clean face. This matrix representation is then used to define a proper objective function and perform blur-robust face recognition. 
To develop robust and generalizable models for expression analysis one needs to break the dependence of the models on the choice of the coordinate frame of the camera. To this end, we build models for expressions on the affine shape-space (Grassmann manifold), as an approximation to the projective shape-space, by using a Riemannian interpretation of deformations that facial expressions cause on different parts of the face. This representation enables us to perform various expression analysis and recognition algorithms without the need for pose normalization as a preprocessing step. There is a large degree of inter-subject variations in performing various expressions. This poses an important challenge on developing robust facial expression recognition algorithms. To address this challenge, we propose a dictionary-based approach for facial expression analysis by decomposing expressions in terms of action units (AUs). First, we construct an AU-dictionary using domain experts' knowledge of AUs. To incorporate the high-level knowledge regarding expression decomposition and AUs, we then perform structure-preserving sparse coding by imposing two layers of grouping over AU-dictionary atoms as well as over the test image matrix columns. We use the computed sparse code matrix for each expressive face to perform expression decomposition and recognition. Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. We propose joint face and facial expression recognition using a dictionary-based component separation algorithm (DCS). In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component, which is sparse with respect to the whole image. 
This assumption leads to a dictionary-based component separation algorithm, which benefits from the idea of sparsity and morphological diversity. The DCS algorithm uses the data-driven dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition.
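The blur-robust recognition idea above, in which each gallery face spans a subspace of its own blurred versions and a probe is matched by its least-squares residual to each subspace, can be sketched in a toy setting. Here uniform 1-D row-wise box blurs stand in for the spatially varying 2-D blur of the dissertation, and the "faces" are random images:

```python
import numpy as np

def blur_bank(face, kernels):
    """Stack blurred copies of a clean gallery face as matrix columns."""
    cols = []
    for k in kernels:
        blurred = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="same"), 1, face)
        cols.append(blurred.ravel())
    return np.stack(cols, axis=1)            # (n_pixels, n_kernels)

def residual(probe, B):
    """Distance from the probe to the span of one gallery face's blur set."""
    coeffs, *_ = np.linalg.lstsq(B, probe.ravel(), rcond=None)
    return float(np.linalg.norm(B @ coeffs - probe.ravel()))

rng = np.random.default_rng(3)
faces = [rng.uniform(0, 1, size=(16, 16)) for _ in range(3)]  # toy gallery
kernels = [np.ones(w) / w for w in (1, 3, 5)]                 # box blurs
banks = [blur_bank(f, kernels) for f in faces]

# Probe: gallery face 1 under an unknown (here: width-3) blur
probe = np.apply_along_axis(
    lambda r: np.convolve(r, np.ones(3) / 3, mode="same"), 1, faces[1])
scores = [residual(probe, B) for B in banks]
print(int(np.argmin(scores)))  # → 1 (correct identity despite blur)
```

The smallest residual identifies the correct gallery face even though the probe is blurred, which is the core of the matrix-column-space formulation described above.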

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. New practices and technologies are required to maintain and increase global food production, to reduce biodiversity loss, and to preserve our natural ecosystems. This book focuses on the latest advances in remote sensing technology and agricultural engineering that enable sustainable agriculture practices. Earth observation data and in-situ and proxy remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing results, among other topics.
    • 

    corecore