
    Self-Dictionary Sparse Regression for Hyperspectral Unmixing: Greedy Pursuit and Pure Pixel Search are Related

    This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in that it allows simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, notably the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise, including in its identification of the (unknown) number of endmembers, under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments.
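    Under the pure pixel assumption, SPA admits a compact implementation: repeatedly select the pixel with the largest residual norm, then project all pixels onto the orthogonal complement of the selected one. The following NumPy sketch is illustrative only (the function name and stopping tolerance are not from the paper); note how the stopping rule doubles as an estimate of the number of endmembers:

```python
import numpy as np

def spa(X, tol=1e-6, max_endmembers=None):
    """Successive projection algorithm sketch.

    X is a bands-by-pixels matrix.  Greedily pick the pixel (column)
    with the largest residual norm, then project every pixel onto the
    orthogonal complement of the chosen one.  The loop stops when the
    largest residual norm drops below `tol`, which simultaneously
    estimates the number of endmembers.
    """
    R = X.astype(float).copy()          # residual matrix
    selected = []
    limit = max_endmembers or X.shape[0]
    for _ in range(limit):
        norms = np.linalg.norm(R, axis=0)
        j = int(np.argmax(norms))
        if norms[j] < tol:              # residual exhausted: done
            break
        selected.append(j)
        u = R[:, j] / norms[j]          # unit vector of chosen pixel
        R = R - np.outer(u, u @ R)      # orthogonal projection step
    return selected
```

    On noiseless data with pure pixels present, the selected indices are exactly the pure pixels, since the norm of a convex combination is maximized at a vertex of the data's convex hull.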

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
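    The linear mixing model underlying most of the surveyed approaches writes each pixel as a nonnegative, sum-to-one combination of endmember signatures. As a minimal illustration (not taken from the paper), abundances can be estimated with a standard fully-constrained-least-squares trick: append a heavily weighted row of ones to the endmember matrix and solve a nonnegative least squares problem:

```python
import numpy as np
from scipy.optimize import nnls

def fcls(M, x, delta=1e3):
    """Fully constrained least squares abundance estimation.

    Estimates a >= 0 with sum(a) ~= 1 for the linear mixing model
    x = M a + noise, where M is bands-by-endmembers.  The sum-to-one
    constraint is enforced softly by appending a row of ones weighted
    by `delta` (a common trick; the weight here is illustrative).
    """
    bands, k = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, k))])  # append constraint row
    x_aug = np.append(x, delta)
    a, _ = nnls(M_aug, x_aug)                        # nonnegative solve
    return a
```

    With a well-conditioned endmember matrix and noiseless data, this recovers the true abundances essentially exactly.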

    Hyperspectral Remote Sensing Data Analysis and Future Challenges


    Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution

    In many computer vision applications, obtaining images of high resolution in both the spatial and spectral domains is equally important. However, due to hardware limitations, one can only expect to acquire images of high resolution in either the spatial or spectral domain. This paper focuses on hyperspectral image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low spatial resolution (LR) but high spectral resolution is fused with a multispectral image (MSI) with high spatial resolution (HR) but low spectral resolution to obtain an HR HSI. Existing deep learning-based solutions are all supervised, requiring a large training set and the availability of HR HSI, which is unrealistic. Here, we make the first attempt to solve the HSI-SR problem using an unsupervised encoder-decoder architecture with the following unique features. First, it is composed of two encoder-decoder networks, coupled through a shared decoder, in order to preserve the rich spectral information from the HSI network. Second, the network encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI. Third, the angular difference between representations is minimized in order to reduce spectral distortion. We refer to the proposed architecture as unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results demonstrate the superior performance of uSDN as compared to the state-of-the-art. Comment: Accepted by the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018, Spotlight).
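    The coupling described above, two modality-specific encoders feeding one shared spectral decoder, with codes constrained to be nonnegative and sum-to-one, can be sketched structurally as follows. This is an untrained, heavily simplified NumPy stand-in (all class and parameter names are hypothetical, and a softmax substitutes for the paper's sparse Dirichlet mechanism):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax: nonnegative outputs that sum to one."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SharedDecoderSketch:
    """Structural sketch of the two-encoder / shared-decoder idea.

    Each encoder maps its modality (LR-HSI pixels or HR-MSI pixels)
    to a k-dimensional code on the simplex; a single shared decoder
    reconstructs full spectra, so both modalities are forced to use
    the same spectral bases.  Weights are random, not trained.
    """
    def __init__(self, hsi_bands, msi_bands, k, seed=0):
        rng = np.random.default_rng(seed)
        self.W_hsi = rng.normal(size=(hsi_bands, k))
        self.W_msi = rng.normal(size=(msi_bands, k))
        self.D = rng.random((k, hsi_bands))    # shared spectral decoder

    def encode_hsi(self, x):
        return softmax(x @ self.W_hsi)

    def encode_msi(self, y):
        return softmax(y @ self.W_msi)

    def decode(self, code):
        return code @ self.D
```

    Decoding an MSI code through the shared decoder yields a full-band spectrum, which is the mechanism by which spectral information transfers to the high-spatial-resolution modality.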

    Using Lidar to geometrically-constrain signature spaces for physics-based target detection

    A fundamental task when performing target detection on spectral imagery is ensuring that a target signature is in the same metric domain as the measured spectral data set. Remotely sensed data are typically collected in digital counts and calibrated to radiance. That is, calibrated data have units of spectral radiance, while target signatures in the visible regime are commonly characterized in units of reflectance. A necessary precursor to running a target detection algorithm is converting the measured scene data and target signature to the same domain. Atmospheric inversion or compensation is a well-known method for transforming measured scene radiance values into the reflectance domain. While this method is mathematically trivial, it is computationally attractive and is most effective when illumination conditions are constant across a scene. However, when illumination conditions are not constant for a given scene, significant error may be introduced when applying the same linear inversion globally. In contrast to the inversion methodology, physics-based forward modeling approaches aim to predict the possible ways that a target might appear in a scene using atmospheric and radiometric models. To fully encompass possible target variability due to changing illumination levels, a target vector space is created. In addition to accounting for varying illumination, physics-based model approaches have a distinct advantage in that they can also incorporate target variability due to a variety of other sources, including adjacency effects, target orientation, and mixed pixels. Increasing the variability of the target vector space may be beneficial in a global sense in that it may allow for the detection of difficult targets, such as shadowed or partially concealed targets. However, it should also be noted that expansion of the target space may introduce unnecessary confusion for a given pixel.
Furthermore, traditional physics-based approaches make certain assumptions which may be prudent only when passive, spectral data for a scene are available. Common examples include the assumption of a flat ground plane and pure target pixels. Many of these assumptions may be attributed to the lack of three-dimensional (3D) spatial information for the scene. In the event that 3D spatial information were available, certain assumptions could be relaxed, allowing accurate geometric information to be fed to the physics-based model on a pixel-by-pixel basis. Doing so may effectively constrain the physics-based model, resulting in a pixel-specific target space with optimized variability and minimized confusion. This body of work explores using spatial information from a topographic Light Detection and Ranging (Lidar) system as a means to enhance the fidelity of physics-based models for spectral target detection. The incorporation of subpixel spatial information, relative to a hyperspectral image (HSI) pixel, provides valuable insight about plausible geometric configurations of a target, background, and illumination sources within a scene. Methods for estimating local geometry on a per-pixel basis are introduced; this spatial information is then fed into a physics-based model for the forward prediction of a target in radiance space. The target detection performance based on this spatially enhanced spectral target space is assessed relative to current state-of-the-art spectral algorithms.
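    The role of per-pixel geometry in a forward model can be illustrated with a toy radiometric calculation: lidar-derived surface normals, shadow flags, and sky-view factors scale the illumination that a reflectance spectrum sees before it is predicted into radiance space. Everything below is a deliberately simplified sketch (names and values are illustrative, not the thesis's actual radiometric model):

```python
import numpy as np

def predict_target_radiance(reflectance, normal, sun_dir,
                            E_direct, E_diffuse,
                            shadowed=False, sky_view=1.0):
    """Toy Lambertian forward prediction of target radiance.

    Per-pixel geometry from lidar enters three ways: the surface
    normal sets the cosine of the solar incidence angle, a shadow
    flag gates the direct term, and a sky-view factor scales the
    diffuse term.  Spectral quantities are plain NumPy arrays.
    """
    cos_i = max(float(np.dot(normal, sun_dir)), 0.0)   # incidence cosine
    direct = 0.0 if shadowed else E_direct * cos_i     # gated direct term
    irradiance = direct + E_diffuse * sky_view         # total irradiance
    return reflectance * irradiance / np.pi            # Lambertian radiance
```

    Evaluating this with and without the shadow flag shows how a single target spectrum expands into a geometry-dependent family of radiance predictions, which is the per-pixel constraint the thesis exploits.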

    Sparse representation based hyperspectral image compression and classification

    Abstract This thesis presents a research work on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries distinguished by the terms sparse representation spectral dictionary (SRSD) and multi-scale spectral dictionary (MSSD), respectively. The former is learnt in the spectral domain to exploit the spectral correlations, and the latter in wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demand of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics, and compared to selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both SRSD and MSSD approaches. For the proposed hyperspectral image classification method, we utilize the sparse coefficients for training support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared to a number of similar or representative methods. The results show that our approach could outperform other approaches based on SVM or sparse representation. This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression. Specifically, it reveals the effectiveness of sparse representation for the exploitation of spectral correlations in hyperspectral images. 
In addition, we have shown that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
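    The sparse coding step at the heart of both the compression and classification pipelines can be illustrated with orthogonal matching pursuit: each spectrum is approximated by a few dictionary atoms, and storing only the (index, coefficient) pairs is what makes the representation compressive. A minimal sketch (assuming unit-norm dictionary columns; not the thesis's exact SRSD/MSSD implementation):

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit sketch.

    D is a signal-by-atoms dictionary with (assumed) unit-norm
    columns.  Greedily pick the atom most correlated with the
    residual, refit all selected atoms by least squares, and repeat.
    Returns a dense coefficient vector with at most `n_nonzero`
    nonzero entries.
    """
    residual = x.astype(float).copy()
    idx = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if j not in idx:
            idx.append(j)
        c, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ c                 # refit and update
    coef[idx] = c
    return coef
```

    For a pixel coded with, say, 2 atoms out of a 10-atom dictionary, only two index-value pairs need to be stored or handed to a downstream SVM/kNN classifier.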

    Optimized Hyperspectral Imagery Anomaly Detection Through Robust Parameter Design

    Anomaly detection algorithms for hyperspectral imagery (HSI) are an important first step in the analysis chain, which can reduce the overall amount of data to be processed. The actual amount of data reduced depends greatly on the accuracy of the anomaly detection algorithm implemented. Most, if not all, anomaly detection algorithms require a user to identify some initial parameters. These parameters (or controls) affect overall algorithm performance. Regardless of the anomaly detector being utilized, algorithm performance is often negatively impacted by uncontrollable noise factors which introduce additional variance into the process. In the case of HSI, the noise variables are embedded in the image under consideration. Robust parameter design (RPD) offers a method to model the controls as well as the noise variables and identify robust parameters. This research identifies image noise characteristics necessary to perform RPD on HSI. Additionally, a small sample training and test algorithm is presented. Finally, the standard RPD model is extended to consider higher-order noise coefficients. Mean and variance RPD models are optimized in a dual response function suggested by Lin and Tu. Results are presented from simulations and two anomaly detection algorithms, the Reed-Xiaoli anomaly detector and the autonomous global anomaly detector.
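    The Lin and Tu dual-response criterion combines squared bias and variance into a single objective, mean squared error about a target, which is then minimized over the control settings. A small sketch of that optimization (the fitted quadratic response surfaces below are invented for illustration, not from this research):

```python
import numpy as np
from scipy.optimize import minimize

def lin_tu_mse(x, mean_model, var_model, target):
    """Lin-Tu dual-response objective: squared bias plus variance."""
    return (mean_model(x) - target) ** 2 + var_model(x)

# Illustrative fitted response surfaces over two coded control factors
# (coefficients are made up for the sketch):
mean_model = lambda x: 5.0 + 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[0] * x[1]
var_model = lambda x: 1.0 + (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2

# Minimize MSE over the coded design region [-1, 1]^2.
res = minimize(lin_tu_mse, x0=np.zeros(2),
               args=(mean_model, var_model, 5.0),
               bounds=[(-1, 1), (-1, 1)])
```

    The optimizer trades a little bias for lower variance, which is exactly the robustness-versus-accuracy compromise RPD formalizes.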

    Methods for Generating High-Fidelity Trace Chemical Residue Reflectance Signatures for Active Spectroscopy Classification Applications

    Standoff detection and identification of trace chemicals in hyperspectral infrared images is an enabling capability in a variety of applications relevant to defense, law enforcement, and intelligence communities. Performance of these methods is impacted by the spectral signature variability due to the presence of contaminants, surface roughness, nonlinear effects, etc. Though multiple classes of algorithms exist for the detection and classification of these signatures, they are limited by the availability of relevant reference datasets. In this work, we first address the lack of physics-based models that can accurately predict trace chemical spectra. Most available models assume that the chemical takes the form of spherical particles or uniform thin films. A more realistic chemical presentation that could be encountered is that of a non-uniform chemical film that is deposited after evaporation of the solvent which contained the chemical. This research presents an improved signature model for this type of solid film. The proposed model, called sparse transfer matrix (STM), includes a log-normal distribution of film thicknesses and is found to reduce the root-mean-square error between simulated and measured data by about 25% when compared with either the particle or uniform thin film models. When applied to measured data, the sparse transfer matrix model provides a 0.10-0.28 increase in classification accuracy over traditional models. There remain limitations in the STM model which prevent the predicted spectra from being well-matched to the measured data in some cases. To overcome this, we leverage the field of domain adaptation to translate data from the simulated to the measured data domain. This thesis presents the first one-dimensional (1D) conditional generative adversarial network (GAN) to perform spectrum-to-spectrum translation of reflectance signatures. 
We apply the 1D conditional GAN to a library of simulated spectra and quantify the improvement with the translated library. The method demonstrates an increase in overall classification accuracy to 0.723 from the accuracy of 0.622 achieved using the STM model when tested on real data. However, the performance improvement is biased towards data included in the GAN training set. The next phase of the research focuses on learning models that are more robust to different parameter combinations for which we do not have measured data. This part of the research leverages elements from the field of theory-guided data science. Specifically, we develop a physics-guided neural network (PGNN) for predicting chemical reflectance for a set of parameterized inputs that is more accurate than the state-of-the-art physics-based signature model for chemical residues. After training the PGNN, we use it to generate a library of predicted spectra for training a classifier. We compare the classification accuracy when using this PGNN library versus a library generated by the physics-based model. Using the PGNN, the average classification accuracy increases to 0.813 on real chemical reflectance data, including data from chemicals not included in the PGNN training set. The products of this thesis work include methods for producing realistic trace chemical residue reflectance signatures as well as demonstrations of improved performance in active spectroscopy classification applications. These methods provide great value to a range of scientific communities. The novel STM signature model enables existing spectroscopy sensors and algorithms to perform well on real-world problems where chemical contaminants are non-uniform. The 1D conditional GAN is the first of its kind and can be applied to many other 1D datasets, such as audio and other time-series data. 
    Finally, the application of theory-guided data science to the trace chemical problem not only enhances the quality of results for known targets and backgrounds, but also increases the robustness to new targets.
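    The physics-guided idea can be summarized as a training loss with two terms: a data-fit term against measured spectra and a consistency term that keeps the network's predictions near a physics-based model's output. The sketch below shows only that loss structure (function names and the weighting are hypothetical, not the thesis's actual PGNN formulation):

```python
import numpy as np

def physics_guided_loss(pred, measured, physics_pred, lam=0.1):
    """Illustrative physics-guided training loss.

    `pred` is the network's predicted spectrum, `measured` the ground
    truth, and `physics_pred` the output of a physics-based signature
    model for the same inputs.  The second term regularizes the
    network toward physically plausible spectra; `lam` balances the
    two terms and is a made-up hyperparameter here.
    """
    data_loss = np.mean((pred - measured) ** 2)        # fit the data
    physics_loss = np.mean((pred - physics_pred) ** 2) # stay physical
    return data_loss + lam * physics_loss
```

    In training, the physics term supplies a learning signal even for parameter combinations with no measured data, which is why the PGNN generalizes to chemicals outside its training set better than a purely data-driven network.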