
    Physics-constrained Hyperspectral Data Exploitation Across Diverse Atmospheric Scenarios

    Hyperspectral target detection promises new operational advantages as instrument spectral resolution increases and material discrimination becomes more robust. Resolving surface materials requires fast, accurate accounting of atmospheric effects to increase detection accuracy while minimizing false alarms. This dissertation investigates deep learning methods, constrained by the processes governing radiative transfer, that efficiently perform atmospheric compensation on data collected by long-wave infrared (LWIR) hyperspectral sensors. These compensation methods depend on generative modeling techniques and permutation-invariant neural network architectures to predict LWIR spectral radiometric quantities. The compensation algorithms developed in this work were evaluated in terms of target detection performance on collected data, yielding detection performance comparable to established methods while accelerating the image processing chain by a factor of eight.

    Multimodal Representation Learning and Set Attention for LWIR In-Scene Atmospheric Compensation

    A multimodal generative modeling approach combined with permutation-invariant set attention is investigated in this paper to support long-wave infrared (LWIR) in-scene atmospheric compensation. The generative model can produce realistic atmospheric state vectors (T, H2O, O3) and their corresponding transmittance, upwelling radiance, and downwelling radiance (TUD) vectors by sampling a low-dimensional space. A variational loss, an LWIR radiative transfer loss, and an atmospheric state loss constrain the low-dimensional space, resulting in lower reconstruction error compared to standard mean-squared-error approaches. A permutation-invariant network predicts the generative model's low-dimensional components from in-scene data, allowing simultaneous estimates of the atmospheric state and TUD vector. Forward modeling the predicted atmospheric state vector yields a second atmospheric compensation estimate. Results are reported for collected LWIR data and compared to Fast Line-of-Sight Atmospheric Analysis of Hypercubes - Infrared (FLAASH-IR), demonstrating commensurate performance when applied to a target detection scenario. Additionally, an approximately eightfold reduction in detection time is realized using this neural network-based algorithm compared to FLAASH-IR. Accelerating the target detection pipeline while providing multiple atmospheric estimates is necessary for many real-world, time-sensitive tasks.

    Sustainable Agriculture and Advances of Remote Sensing (Volume 2)

    Agriculture, as the main source of food and one of the most important economic activities globally, is being affected by the impacts of climate change. To maintain and increase the production of the global food system, reduce biodiversity loss, and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agricultural practices. Earth observation data, together with in situ and proxy remote sensing data, are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.

    Efficient Nonlinear Dimensionality Reduction for Pixel-wise Classification of Hyperspectral Imagery

    Classification, target detection, and compression are all important tasks in analyzing hyperspectral imagery (HSI). Because of the high dimensionality of HSI, it is often useful to identify low-dimensional representations of HSI data that make analysis tasks tractable. Traditional linear dimensionality reduction (DR) methods are not adequate due to the nonlinear distribution of HSI data. Many nonlinear DR methods that are successful in the general data processing domain, such as Local Linear Embedding (LLE) [1], Isometric Feature Mapping (ISOMAP) [2], and Kernel Principal Components Analysis (KPCA) [3], run very slowly and require large amounts of memory when applied to HSI. For example, applying KPCA to the 512×217-pixel, 204-band Salinas image on a modern desktop computer (AMD FX-6300 six-core processor, 32 GB memory) requires more than 5 days of computing time and 28 GB of memory! In this thesis, we propose two different algorithms for significantly improving the computational efficiency of nonlinear DR without adversely affecting the performance of the classification task: Simple Linear Iterative Clustering (SLIC) superpixels and semi-supervised deep autoencoder networks (SSDAN). SLIC is a very popular algorithm developed for computing superpixels in RGB images that can easily be extended to HSI. Each superpixel comprises hundreds or thousands of pixels grouped by spatial and spectral similarity and is represented by the mean spectrum and spatial position of its component pixels. Since the number of superpixels is much smaller than the number of pixels in the image, they can be used as input for nonlinear DR, which significantly reduces the required computation time and memory versus providing all of the original pixels as input. After nonlinear DR is performed using superpixels as input, an interpolation step can be used to obtain the embedding of each original image pixel in the low-dimensional space.
    To illustrate the power of using superpixels in an HSI classification pipeline, we conduct experiments on three widely used and publicly available hyperspectral images: Indian Pines, Salinas, and Pavia. The experimental results for all three images demonstrate that for moderately sized superpixels, the overall accuracy of classification using superpixel-based nonlinear DR matches, and sometimes exceeds, the overall accuracy of classification using pixel-based nonlinear DR, with a computational speed that is two to three orders of magnitude faster. Even though superpixel-based nonlinear DR shows promise for HSI classification, it does have disadvantages. First, it is costly to perform out-of-sample extensions. Second, it does not generalize to other types of data that might not have spatial information. Third, the original input pixels cannot be approximately recovered, as is possible in many DR algorithms. In order to overcome these difficulties, a new autoencoder network, SSDAN, is proposed. It is a fully connected semi-supervised autoencoder network that performs nonlinear DR in a manner that enables class information to be integrated. Features learned from an SSDAN will be similar to those computed via traditional nonlinear DR, and features from the same class will be close to each other. Once the network is well trained on training data, test data can easily be mapped to the low-dimensional embedding. Any kind of data can be used to train an SSDAN, and the decoder portion of the SSDAN can recover the initial input with reasonable loss. Experimental results on pixel-based classification in the Indian Pines, Salinas, and Pavia images show that SSDANs can approximate the overall accuracy of nonlinear DR while significantly improving computational efficiency. We also show that transfer learning can be used to fine-tune the features of a trained SSDAN for a new HSI dataset.
    Finally, experimental results on HSI compression show a trade-off between the Overall Accuracy (OA) of extracted features and the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image.
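The superpixel pipeline above can be sketched in a few lines on synthetic data: average the pixels within each superpixel, run DR on the much smaller set of superpixel spectra, then map every original pixel to its superpixel's embedding. PCA stands in here for the nonlinear DR (LLE/ISOMAP/KPCA) used in the thesis, and the synthetic labels stand in for actual SLIC output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands, n_superpixels, n_dims = 600, 20, 12, 3
pixels = rng.random((n_pixels, n_bands))                 # synthetic HSI pixels
labels = rng.permutation(np.arange(n_pixels) % n_superpixels)  # SLIC stand-in

# 1. Mean spectrum per superpixel: the DR input shrinks from 600 rows to 12.
means = np.stack([pixels[labels == k].mean(axis=0)
                  for k in range(n_superpixels)])

# 2. DR on the superpixel means only (PCA via SVD as a cheap stand-in).
centered = means - means.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:n_dims].T          # (n_superpixels, n_dims)

# 3. Each pixel inherits its superpixel's low-dimensional coordinates
#    (the thesis applies a more careful interpolation at this step).
pixel_embedding = embedding[labels]           # (n_pixels, n_dims)
```

The speedup claim follows directly from step 2: the DR solver sees only `n_superpixels` rows instead of `n_pixels`.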

    Spectral Target Detection using Physics-Based Modeling and a Manifold Learning Technique

    Identification of materials from calibrated radiance data collected by an airborne imaging spectrometer depends strongly on the atmospheric and illumination conditions at the time of collection. This thesis demonstrates a methodology for identifying material spectra using the assumption that each unique material class forms a lower-dimensional manifold (surface) in the higher-dimensional spectral radiance space and that all image spectra reside on, or near, these theoretical manifolds. Using a physical model, a manifold characteristic of the target material exposed to varying illumination and atmospheric conditions is formed. A graph-based model is then applied to the radiance data to capture the intricate structure of each material manifold, followed by the application of the commute time distance (CTD) transformation to separate the target manifold from the background. Detection algorithms are then applied in the CTD subspace. This nonlinear transformation is based on a random walk on a graph and is derived from an eigendecomposition of the pseudoinverse of the graph Laplacian matrix. This work provides a geometric interpretation of the CTD transformation and its algebraic properties, describes the atmospheric and illumination parameters varied in the physics-based model, and examines the influence the target manifold samples have on the orientation of the coordinate axes in the transformed space. The thesis concludes by demonstrating improved detection results in the CTD subspace as compared to detection in the original spectral radiance space.
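The commute time distance named above has a compact closed form: for a graph with Laplacian L and volume vol(G) (the sum of node degrees), the commute time between nodes i and j is vol(G) · (L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ), where L⁺ is the Moore-Penrose pseudoinverse. A minimal sketch on a toy 4-node graph (not the thesis's radiance graph):

```python
import numpy as np

# Toy symmetric adjacency matrix for a 4-node graph with unit edge weights.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W     # graph Laplacian
L_pinv = np.linalg.pinv(L)         # pseudoinverse (eigendecomposition-based)
vol = W.sum()                      # vol(G): sum of degrees

# Commute time distance matrix: vol * (L+_ii + L+_jj - 2 L+_ij).
d = np.diag(L_pinv)
ctd = vol * (d[:, None] + d[None, :] - 2 * L_pinv)
```

Detection then proceeds in the coordinates implied by this distance, where the physics-generated target manifold is pulled apart from the background.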

    Target Detection in a Structured Background Environment Using an Infeasibility Metric in an Invariant Space

    This paper develops a hybrid target detector that incorporates structured backgrounds and physics-based modeling together with a geometric infeasibility metric. Detection algorithms are usually applied to atmospherically compensated hyperspectral imagery. Rather than compensate the imagery, we take the opposite approach, using a physics-based model to generate permutations of what the target might look like as seen by the sensor in radiance space. The development and status of such a method is presented as applied to the generation of target spaces. The generated target spaces are designed to fully encompass image target pixels while using a limited number of input model parameters. Background spaces are modeled using a linear subspace (structured) approach characterized by endmembers found using the maximum distance method (MaxD). After augmenting the image data with the target space, 15 endmembers were found that were not related to the target (i.e., background endmembers). A geometric infeasibility metric is developed that enables one to be more selective in rejecting false alarms. Preliminary results in the design of such a metric show that an orthogonal projection operator based on target space vectors can distinguish between target and background pixels. Furthermore, when used in conjunction with an operator that produces abundance-like values, we obtained separation between target, background, and anomalous pixels. This approach was applied to HYDICE imaging spectrometer data.
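The orthogonal projection operator mentioned above has a standard form: with target-space basis vectors in the columns of T, the projector onto the orthogonal complement is P = I − T(TᵀT)⁻¹Tᵀ, and ‖Px‖ measures how far a pixel x lies from the modeled target space. A minimal sketch with synthetic spectra (not HYDICE data or the paper's model-generated target space):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_target_vecs = 30, 4
T = rng.standard_normal((n_bands, n_target_vecs))       # target-space basis
# Projector onto the orthogonal complement of span(T).
P = np.eye(n_bands) - T @ np.linalg.inv(T.T @ T) @ T.T

target_pixel = T @ rng.standard_normal(n_target_vecs)   # lies in the target space
background_pixel = rng.standard_normal(n_bands)         # generic background

# Infeasibility-style score: residual norm after projecting out the target space.
infeas_target = np.linalg.norm(P @ target_pixel)
infeas_background = np.linalg.norm(P @ background_pixel)
```

A pixel consistent with the target space scores near zero, while background pixels retain a large residual, which is the separation the metric exploits for false-alarm rejection.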

    Hyperspectral Image Analysis through Unsupervised Deep Learning

    Hyperspectral image (HSI) analysis has become an active research area in the computer vision field, with a wide range of applications. However, in order to yield better recognition and analysis results, we need to address two challenging issues of HSI: the existence of mixed pixels and its significantly low spatial resolution (LR). In this dissertation, spectral unmixing (SU) and hyperspectral image super-resolution (HSI-SR) approaches are developed to address these two issues with advanced deep learning models in an unsupervised fashion. A specific application, anomaly detection, is also studied to show the importance of SU. Although deep learning has achieved state-of-the-art performance on supervised problems, its practice on unsupervised problems has not been fully developed. To address the problem of SU, an untied denoising autoencoder is proposed to decompose the HSI into endmembers and abundances with non-negativity and abundance sum-to-one constraints. Denoising capacity is incorporated into the network with a sparsity constraint to boost the performance of endmember extraction and abundance estimation. Moreover, the first attempt is made to solve the problem of HSI-SR using an unsupervised encoder-decoder architecture that fuses the LR HSI with a high-resolution multispectral image (MSI). The architecture is composed of two encoder-decoder networks, coupled through a shared decoder, to preserve the rich spectral information from the HSI network. It encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI.
    The angular difference between the representations is also minimized to reduce spectral distortion. Finally, a novel anomaly detection algorithm is proposed through spectral unmixing and dictionary-based low-rank decomposition, where the dictionary is constructed with mean-shift clustering and the coefficients of the dictionary are encouraged to be low-rank. Experimental evaluations show significant improvement in the performance of anomaly detection conducted on the abundances (through SU). The effectiveness of the proposed approaches has been evaluated thoroughly by extensive experiments, achieving state-of-the-art results.
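The two physical constraints the unmixing network enforces on each abundance vector are non-negativity and sum-to-one. A softmax-style output layer is one common way to impose both at once; the sketch below uses a synthetic endmember matrix `E`, not a trained decoder, and the linear mixing step is the standard x = Ea model.

```python
import numpy as np

def to_abundance(z):
    """Map unconstrained activations z to a valid abundance vector:
    every entry non-negative, entries summing to one (softmax)."""
    e = np.exp(z - z.max())     # shift by the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(2)
n_bands, n_endmembers = 50, 5
E = rng.random((n_bands, n_endmembers))    # columns: synthetic endmember spectra

a = to_abundance(rng.standard_normal(n_endmembers))  # constrained abundances
x_hat = E @ a                              # linear-mixing reconstruction
```

In the dissertation's architecture the decoder weights play the role of E, so the constraint layer guarantees that every reconstruction is a physically plausible mixture.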

    Reconstruction Error and Principal Component Based Anomaly Detection in Hyperspectral Imagery

    The rapid expansion of remote sensing and information collection capabilities demands methods to highlight interesting or anomalous patterns within an overabundance of data. This research addresses this issue for hyperspectral imagery (HSI). Two new reconstruction-based HSI anomaly detectors are outlined: one using principal component analysis (PCA), and the other a form of nonlinear PCA called logistic principal component analysis. Two very effective, yet relatively simple, modifications to the autonomous global anomaly detector are also presented, improving algorithm performance and enabling receiver operating characteristic analysis. A novel technique for HSI anomaly detection dubbed multiple PCA is introduced and found to perform as well as or better than existing detectors on HYDICE data while using only linear deterministic methods. Finally, a response-surface-based optimization is performed on algorithm parameters to achieve consistent, desired algorithm performance.
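The reconstruction-error idea underlying these detectors can be sketched directly: fit PCA to the scene's pixels, reconstruct each pixel from the leading components, and score by reconstruction error; pixels that fit the background model reconstruct well, anomalies do not. The data below are synthetic stand-ins for HSI pixels, not HYDICE imagery.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_bands, k = 500, 40, 5

# Background pixels: low-rank structure plus small noise.
basis = rng.standard_normal((k, n_bands))
pixels = (rng.standard_normal((n_pixels, k)) @ basis
          + 0.01 * rng.standard_normal((n_pixels, n_bands)))
anomaly = 5.0 * rng.standard_normal(n_bands)   # not drawn from the background model

# Fit the k-component PCA background model via SVD of the centered data.
mean = pixels.mean(axis=0)
_, _, vt = np.linalg.svd(pixels - mean, full_matrices=False)
components = vt[:k]                            # leading principal directions

def score(x):
    """Anomaly score: reconstruction error under the PCA background model."""
    coeffs = (x - mean) @ components.T         # project onto the components
    return np.linalg.norm((x - mean) - coeffs @ components)

background_scores = [score(p) for p in pixels[:50]]
```

Thresholding `score` yields the detector; sweeping the threshold produces the receiver operating characteristic curve the abstract mentions.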