
    Matched filter stochastic background characterization for hyperspectral target detection

    Algorithms exploiting hyperspectral imagery for target detection have continually evolved to provide improved detection results. Adaptive matched filters, which may be derived in many different scientific fields, can be used to locate spectral targets by modeling the scene background either as structured (geometric), with a set of endmembers (basis vectors), or as unstructured (stochastic), with a covariance matrix. In unstructured background research, various methods of calculating the background covariance matrix have been developed, each involving either the removal of target signatures from the background model or the segmenting of image data into spatial or spectral subsets. The objective of these methods is to derive a background which matches the source of mixture interference for the detection of subpixel targets, or matches the source of false alarms in the scene for the detection of fully resolved targets. In addition, these techniques increase the multivariate normality of the data from which the background is characterized, thus increasing adherence to the normality assumption inherent in the matched filter and ultimately improving target detection results. Such techniques for improved background characterization are widely practiced but not well documented or compared. This thesis establishes a strong theoretical foundation, describing the necessary preprocessing of hyperspectral imagery, deriving the spectral matched filter, and capturing current methods of unstructured background characterization. Extensive experimentation allows for a comparative evaluation of several current unstructured background characterization methods as well as some new methods which improve stochastic modeling of the background.
The results show that consistent improvements over scene-wide statistics can be achieved through spatial or spectral subsetting, and analysis of the results provides insight into the tradespaces of matching the interference, background multivariate normality, and target exclusion for these techniques.
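
    The normalized matched filter described above can be sketched in a few lines. This is a minimal illustration, not the thesis code; the function name and the use of scene-wide statistics (rather than spatial or spectral subsets) are our assumptions.

```python
# Minimal sketch of the unstructured (stochastic) spectral matched filter:
# the background is modeled by a mean vector and covariance matrix, and each
# pixel is scored against a known target spectrum. Names are illustrative.
import numpy as np

def matched_filter_scores(cube, target):
    """Score each pixel of an (H, W, B) cube against a B-band target spectrum.

    Uses scene-wide background statistics; subsetting strategies would
    instead estimate (mu, cov) per spatial tile or spectral cluster.
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)           # B x B background covariance
    cov_inv = np.linalg.pinv(cov)           # pseudoinverse for stability
    d = target - mu
    # Normalization gives unit response for a pixel equal to the target.
    w = cov_inv @ d / (d @ cov_inv @ d)
    return ((X - mu) @ w).reshape(H, W)
```

    Subset-based background characterization would only change how `mu` and `cov` are estimated; the filter itself is unchanged.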

    Spectral Target Detection using Physics-Based Modeling and a Manifold Learning Technique

    Identification of materials from calibrated radiance data collected by an airborne imaging spectrometer depends strongly on the atmospheric and illumination conditions at the time of collection. This thesis demonstrates a methodology for identifying material spectra using the assumption that each unique material class forms a lower-dimensional manifold (surface) in the higher-dimensional spectral radiance space and that all image spectra reside on, or near, these theoretic manifolds. Using a physical model, a manifold characteristic of the target material exposed to varying illumination and atmospheric conditions is formed. A graph-based model is then applied to the radiance data to capture the intricate structure of each material manifold, followed by the application of the commute time distance (CTD) transformation to separate the target manifold from the background. Detection algorithms are then applied in the CTD subspace. This nonlinear transformation is based on a random walk on a graph and is derived from an eigendecomposition of the pseudoinverse of the graph Laplacian matrix. This work examines the geometric interpretation of the CTD transformation, its algebraic properties, the atmospheric and illumination parameters varied in the physics-based model, and the influence the target manifold samples have on the orientation of the coordinate axes in the transformed space. This thesis concludes by demonstrating improved detection results in the CTD subspace as compared to detection in the original spectral radiance space.
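
    The CTD transformation can be illustrated directly from its definition: an eigendecomposition of the pseudoinverse of the graph Laplacian yields coordinates whose squared Euclidean distances are commute times. The Gaussian affinity graph and its width parameter below are illustrative assumptions, not the thesis's graph model.

```python
# Hedged sketch of the commute time distance (CTD) embedding: build a graph
# Laplacian, take its pseudoinverse, and scale its eigenvectors so squared
# embedding distances equal commute times.
import numpy as np

def ctd_embedding(X, sigma=1.0):
    """Embed N points (rows of X) so squared distances equal commute times."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma**2))        # Gaussian affinity (assumption)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W               # graph Laplacian
    L_pinv = np.linalg.pinv(L)              # pseudoinverse
    vals, vecs = np.linalg.eigh(L_pinv)
    vals = np.clip(vals, 0.0, None)         # guard tiny negative eigenvalues
    vol = W.sum()                           # graph volume
    return np.sqrt(vol) * vecs * np.sqrt(vals)  # rows are embedded points
```

    In the thesis's setting, target-manifold samples generated from the physics-based model would be appended to the image spectra before building the graph, and detection would then run on the embedded coordinates.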

    Distributed Spacing Stochastic Feature Selection and its Application to Textile Classification

    Many situations require quickly and accurately locating dismounted individuals in a variety of environments. In conjunction with other dismount detection techniques, being able to detect and classify clothing (textiles) provides a more comprehensive and complete dismount characterization capability. Because textile classification depends on distinguishing between different material types, hyperspectral data, which consists of several hundred spectral channels sampled from a continuous electromagnetic spectrum, is used as a data source. However, a hyperspectral image generates vast amounts of information and can be computationally intractable to analyze. A primary means to reduce the computational complexity is to use feature selection to identify a reduced set of features that effectively represents a specific class. While many feature selection methods exist, applying them to continuous data results in closely clustered feature sets that are highly redundant and fail in the presence of noise. This dissertation presents a novel feature selection method that limits feature redundancy and improves classification. This method uses a stochastic search algorithm in conjunction with a heuristic that combines measures of distance and dependence to select features. Comparison testing between the presented feature selection method and existing methods uses hyperspectral data and image wavelet decompositions. The presented method produces feature sets with an average correlation of 0.40-0.54, significantly lower than the 0.70-0.99 of the existing feature selection methods. In terms of classification accuracy, the feature sets produced outperform those of other methods, to a significance of 0.025, and show greater robustness under noise representative of a hyperspectral imaging system.
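
    The dissertation's stochastic search and distance/dependence heuristic are not reproduced here; as a simpler sketch of the same goal, limiting redundancy among selected features, the following greedy procedure admits features in order of relevance while capping their pairwise correlation. All names and the correlation cap are assumptions.

```python
# Greedy, correlation-capped feature selection: a simplified stand-in for
# redundancy-limiting selection on continuous (e.g., hyperspectral) data.
import numpy as np

def select_low_redundancy(X, relevance, k, max_corr=0.5):
    """Pick up to k column indices of X, ordered by relevance, skipping any
    feature whose |correlation| with an already-chosen feature exceeds
    max_corr."""
    C = np.abs(np.corrcoef(X, rowvar=False))    # feature-feature correlation
    chosen = []
    for j in np.argsort(relevance)[::-1]:       # most relevant first
        if all(C[j, i] <= max_corr for i in chosen):
            chosen.append(int(j))
        if len(chosen) == k:
            break
    return chosen
```

    For densely sampled spectra, adjacent bands are nearly collinear, so a cap like this forces the selected bands to spread out across the spectrum, which is the behavior the dissertation's distance term encourages.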

    Semi-Supervised Normalized Embeddings for Fusion and Land-Use Classification of Multiple View Data

    Land-use classification from multiple data sources is an important problem in remote sensing. Data fusion algorithms like Semi-Supervised Manifold Alignment (SSMA) and Manifold Alignment with Schroedinger Eigenmaps (SEMA) use spectral and/or spatial features from multispectral, multimodal imagery to project each data source into a common latent space in which classification can be performed. However, in order for these algorithms to be well-posed, they require an expert user to either directly identify pairwise dissimilarities in the data or to identify class labels for a subset of points from which pairwise dissimilarities can be derived. In this paper, we propose a related data fusion technique, which we refer to as Semi-Supervised Normalized Embeddings (SSNE). SSNE is defined by modifying the SSMA/SEMA objective functions to incorporate an extra normalization term that enables a latent space to be well-defined even when no pairwise dissimilarities are provided. Using publicly available data from the 2017 IEEE GRSS Data Fusion Contest, we show that SSNE enables similar land-use classification performance to SSMA/SEMA in scenarios where pairwise dissimilarities are available, but that unlike SSMA/SEMA, it also enables land-use classification in other scenarios. We compare the effect of applying different classification algorithms, including a support vector machine (SVM), a linear discriminant analysis classifier (LDA), and a random forest classifier (RF), and show that SSMA/SEMA and SSNE are robust to the use of different classifiers. In addition to comparing the classification performance of SSNE to SSMA/SEMA and comparing classification algorithms, we utilize manifold alignment to classify unknown views.
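
    The SSMA/SEMA/SSNE objectives are not reproduced here, but the underlying manifold-alignment idea can be sketched: build one joint graph over both views, add strong edges between the few cross-view points known to share a class label, and embed the joint graph with a Laplacian eigenmap. Weights, graph construction, and names below are illustrative assumptions, not the SSNE objective itself.

```python
# Rough numpy-only sketch of semi-supervised manifold alignment: two views
# are joined in one graph, cross-view edges encode shared labels, and a joint
# Laplacian eigenmap yields a common latent space.
import numpy as np

def joint_embedding(Xa, Xb, pairs, dim=2, sigma=1.0, mu=10.0):
    """Embed rows of Xa and Xb together; `pairs` lists (i, j) index pairs
    known to share a class label across the two views."""
    def affinity(X):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-sq / (2 * sigma**2))
        np.fill_diagonal(W, 0.0)
        return W
    na, nb = len(Xa), len(Xb)
    W = np.zeros((na + nb, na + nb))
    W[:na, :na] = affinity(Xa)              # within-view structure
    W[na:, na:] = affinity(Xb)
    for i, j in pairs:                      # cross-view label links
        W[i, na + j] = W[na + j, i] = mu
    L = np.diag(W.sum(1)) - W               # joint graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    Z = vecs[:, 1:dim + 1]                  # skip the constant eigenvector
    return Z[:na], Z[na:]
```

    Any classifier (SVM, LDA, RF) can then be trained on the shared coordinates, which is the role the latent space plays in SSMA/SEMA/SSNE.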

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Cognitive Image Fusion and Assessment


    Harmonic Analysis Inspired Data Fusion for Applications in Remote Sensing

    This thesis addresses the fusion of multiple data sources arising in remote sensing, such as hyperspectral and LIDAR. Fusing multiple data sources provides better data representation and classification results than any of the independent data sources would alone. We begin our investigation with the well-studied Laplacian Eigenmap (LE) algorithm. This algorithm offers a rich template to which fusion concepts can be added. For each phase of the LE algorithm (graph, operator, and feature space) we develop and test different data fusion techniques. We also investigate how partially labeled data and approximate LE preimages can be used to achieve data fusion. Lastly, we study several numerical acceleration techniques that can be used to augment the developed algorithms, namely the Nyström extension, random projections, and approximate neighborhood constructions. The Nyström extension is studied in detail, and the application of frame theory and Sigma-Delta quantization is proposed to enrich it.
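
    The LE template described above can be sketched compactly: affinity graph, normalized graph Laplacian, and the bottom non-trivial eigenvectors as the feature space. Fusion variants would modify the graph stage, for example by combining hyperspectral and LIDAR affinities; here a single source is shown, and the kernel width is an assumption.

```python
# Compact sketch of the Laplacian Eigenmap (LE) pipeline: Gaussian affinity
# graph -> symmetric normalized Laplacian -> low eigenvectors as features.
import numpy as np

def laplacian_eigenmap(X, dim=2, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma**2))        # affinity graph (graph phase)
    np.fill_diagonal(W, 0.0)
    D = W.sum(1)
    # Symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2} (operator phase)
    L = np.eye(len(X)) - (W / np.sqrt(D)[:, None]) / np.sqrt(D)[None, :]
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]               # feature space phase
```

    One common graph-level fusion strategy combines per-source affinity matrices (e.g. an entrywise product or weighted sum) before forming the Laplacian, which is the kind of modification the thesis studies at each phase.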

    Spectral image utility for target detection applications

    In a wide range of applications, images convey useful information about scenes. The “utility” of an image is defined with reference to the specific task that an observer seeks to accomplish, and differs from the “fidelity” of the image, which seeks to capture the ability of the image to represent the true nature of the scene. In remote sensing of the earth, various means of characterizing the utility of satellite and airborne imagery have evolved over the years. Recent advances in the imaging modality of spectral imaging have enabled synoptic views of the earth at many finely sampled wavelengths over a broad spectral band. These advances challenge the ability of traditional earth observation image utility metrics to describe the rich information content of spectral images. Traditional approaches to image utility that are based on overhead panchromatic image interpretability by a human observer are not applicable to spectral imagery, which requires automated processing. This research establishes the context for spectral image utility by reviewing traditional approaches and current methods for describing spectral image utility. It proposes a new approach to assessing and predicting spectral image utility for the specific application of target detection. We develop a novel approach to assessing the utility of any spectral image using the target-implant method. This method is not limited by the requirements of traditional target detection performance assessment, namely ground truth and an adequate number of target pixels in the scene. The flexibility of this approach is demonstrated by assessing the utility of a wide range of real and simulated spectral imagery over a variety of target detection scenarios. The assessed image utility may be summarized to any desired level of specificity based on the image analysis requirements.
We also present an approach to predicting spectral image utility that derives statistical parameters directly from an image and uses them to model target detection algorithm output. The image-derived predicted utility is directly comparable to the assessed utility, and the accuracy of prediction is shown to improve with statistical models that capture the non-Gaussian behavior of real spectral image target detection algorithm outputs. The sensitivity of the proposed spectral image utility metric to various image chain parameters is examined in detail, revealing characteristics, requirements, and limitations that provide insight into the relative importance of parameters in the image utility. The results of these investigations lead to a better understanding of spectral image information vis-à-vis target detection performance that will hopefully prove useful to the spectral imagery analysis community and represent a step towards quantifying the ability of a spectral image to satisfy information exploitation requirements.
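
    The target-implant method described above lends itself to a brief sketch: a known target spectrum is linearly mixed into selected scene pixels at a chosen fill fraction, producing truth-marked pixels without requiring real targets in the scene. The linear-mixing assumption and names below are ours, not the thesis implementation.

```python
# Sketch of target implanting: blend a target spectrum into scene pixels at
# a given fill fraction to create ground-truthed test pixels.
import numpy as np

def implant_target(cube, target, locations, fill=0.5):
    """Return a copy of the (H, W, B) cube with `target` linearly mixed into
    each (row, col) in `locations` at fraction `fill` (1.0 = fully resolved
    target, < 1.0 = subpixel target)."""
    out = cube.astype(float)                # astype makes a copy
    for r, c in locations:
        out[r, c] = fill * target + (1.0 - fill) * out[r, c]
    return out
```

    Running a detector on the implanted cube and scoring how well the known locations are recovered gives a utility estimate for that image and scenario, which is the assessment role the method plays in the thesis.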

    Physics-constrained Hyperspectral Data Exploitation Across Diverse Atmospheric Scenarios

    Hyperspectral target detection promises new operational advantages, with increasing instrument spectral resolution and robust material discrimination. Resolving surface materials requires a fast and accurate accounting of atmospheric effects to increase detection accuracy while minimizing false alarms. This dissertation investigates deep learning methods constrained by the processes governing radiative transfer to efficiently perform atmospheric compensation on data collected by long-wave infrared (LWIR) hyperspectral sensors. These compensation methods depend on generative modeling techniques and permutation-invariant neural network architectures to predict LWIR spectral radiometric quantities. The compensation algorithms developed in this work were examined from the perspective of target detection performance using collected data. These deep learning-based compensation algorithms resulted in comparable detection performance to established methods while accelerating the image processing chain by a factor of eight.
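
    The dissertation's networks are not public here; as a hedged illustration of the permutation-invariant architecture it cites, the following is a minimal Deep-Sets-style forward pass in numpy: a per-element encoder, symmetric (mean) pooling, and a decoder, so the output is unchanged under any reordering of the input spectra. Layer sizes and random weights are arbitrary placeholders.

```python
# Minimal permutation-invariant network (Deep Sets pattern): encode each set
# element independently, pool with a symmetric operation, then decode.
import numpy as np

def permutation_invariant_net(spectra, seed=0):
    """spectra: (N, B) set of N spectra with B bands -> fixed-size output."""
    rng = np.random.default_rng(seed)       # fixed weights (placeholders)
    B = spectra.shape[1]
    W1 = rng.normal(size=(B, 16))
    W2 = rng.normal(size=(16, 4))
    h = np.tanh(spectra @ W1)               # per-element encoder
    pooled = h.mean(axis=0)                 # symmetric pooling: order-free
    return np.tanh(pooled @ W2)             # decoder on pooled representation
```

    Because the pooling is symmetric, a scene's pixels can be fed in any order, which is what makes this architecture attractive for predicting per-scene atmospheric quantities from unordered collections of spectra.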