
    Mathematical methods for anomaly grouping in hyperspectral images

    The topological anomaly detection (TAD) algorithm differs from other anomaly detection algorithms in that it does not rely on the data's being normally distributed. We have built on this advantage of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. We have done this by identifying and integrating clusters of anomalous pixels, which we accomplished with a graph-theoretical method that combines spatial and spectral information. By applying our method, the Anomaly Clustering algorithm, to hyperspectral images, we have found that it integrates small clusters of anomalous pixels, such as those corresponding to rooftops, into single anomalies; this improves the visualization and interpretation of objects. We have also performed a locally linear embedding (LLE) analysis of the TAD results to illustrate its use as a means of grouping anomalies together. By running the LLE algorithm on only the anomalies identified by TAD, we drastically reduce the amount of computation needed for this computationally heavy algorithm. We also propose an application of a shifted QR algorithm to improve the speed of the LLE algorithm.
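    The grouping step described above can be illustrated with a short sketch (not the authors' implementation): connect anomalous pixels that are both spatially adjacent and spectrally similar, then report connected components of the resulting graph as single anomalous objects. The thresholds and the spectral-angle test are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_anomalies(coords, spectra, max_dist=1.5, max_angle=0.1):
    """Group anomalous pixels into objects: link pixels that are spatially
    adjacent AND spectrally similar, then take connected components of the
    graph as single anomalies."""
    n = len(coords)
    rows, cols = [], []
    # Normalize spectra so the spectral test becomes an angle comparison.
    unit = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
    for i in range(n):
        for j in range(i + 1, n):
            spatial_ok = np.linalg.norm(coords[i] - coords[j]) <= max_dist
            # Spectral angle between the two pixel spectra.
            ang = np.arccos(np.clip(unit[i] @ unit[j], -1.0, 1.0))
            if spatial_ok and ang <= max_angle:
                rows.append(i)
                cols.append(j)
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    n_objects, labels = connected_components(adj, directed=False)
    return n_objects, labels

# Example: two spatially separated pairs of spectrally similar pixels.
coords = np.array([[0, 0], [0, 1], [10, 10], [10, 11]], dtype=float)
spectra = np.array([[1, 0], [1, 0.01], [0, 1], [0.01, 1]], dtype=float)
n_objects, labels = cluster_anomalies(coords, spectra)
print(n_objects, labels)  # 2 objects: [0 0 1 1]
```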

    Harmonic Analysis Inspired Data Fusion for Applications in Remote Sensing

    This thesis addresses the fusion of multiple data sources arising in remote sensing, such as hyperspectral and LIDAR. Fusing multiple data sources provides better data representation and classification results than any of the independent data sources would alone. We begin our investigation with the well-studied Laplacian Eigenmaps (LE) algorithm. This algorithm offers a rich template to which fusion concepts can be added. For each phase of the LE algorithm (graph, operator, and feature space) we develop and test different data fusion techniques. We also investigate how partially labeled data and approximate LE preimages can be used to achieve data fusion. Lastly, we study several numerical acceleration techniques that can be used to augment the developed algorithms, namely the Nyström extension, random projections, and approximate neighborhood constructions. The Nyström extension is studied in detail, and the application of frame theory and Sigma-Delta quantization is proposed to enrich it.
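    As an illustration of fusion at the graph phase of the LE pipeline, the sketch below blends heat-kernel affinities from two co-registered modalities before computing the embedding. The convex-combination weighting and kernel bandwidth are assumptions for illustration, not the thesis's specific techniques.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.linalg import eigsh

def fused_laplacian_eigenmap(X1, X2, alpha=0.5, bandwidth=1.0, n_dims=2):
    """Laplacian Eigenmaps with graph-level fusion: build a heat-kernel
    affinity per modality, blend them, and embed with the smallest
    nontrivial eigenvectors of the normalized graph Laplacian."""
    W1 = np.exp(-cdist(X1, X1, "sqeuclidean") / (2 * bandwidth**2))
    W2 = np.exp(-cdist(X2, X2, "sqeuclidean") / (2 * bandwidth**2))
    W = alpha * W1 + (1 - alpha) * W2              # fuse at the graph phase
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt  # normalized Laplacian
    # Smallest eigenpairs; skip the trivial constant eigenvector.
    vals, vecs = eigsh(L, k=n_dims + 1, which="SM")
    return vecs[:, 1 : n_dims + 1]

# Example: 50 samples described by two modalities.
rng = np.random.default_rng(0)
X_spec = rng.normal(size=(50, 30))   # hyperspectral-like features
X_lidar = rng.normal(size=(50, 3))   # LIDAR-like features
Y = fused_laplacian_eigenmap(X_spec, X_lidar)
print(Y.shape)  # (50, 2)
```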

    Reproducing Kernel Hilbert Space Pruning for Sparse Hyperspectral Abundance Prediction

    Hyperspectral measurements from long-range sensors can give a detailed picture of the items, materials, and chemicals in a scene, but analysis can be difficult, slow, and expensive due to the high spatial and spectral resolutions of state-of-the-art sensors. As such, sparsity is important to enable the future of spectral compression and analytics. It has been observed that environmental and atmospheric effects, including scattering, can produce nonlinear effects that pose challenges for existing source separation and compression methods. We present a novel transformation into Hilbert spaces for pruning and constructing sparse representations via non-negative least squares minimization. We then introduce maximum-likelihood compression vectors to decrease information loss. Our approach is benchmarked against standard pruning and least squares as well as deep learning methods, and is evaluated in terms of overall spectral reconstruction error and compression rate on real and synthetic data. We find that pruning least squares methods converge quickly, unlike matching pursuit methods, and that Hilbert space pruning can reduce error by as much as 40% relative to standard pruning while also outperforming neural network autoencoders.
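    A minimal sketch of sparse abundance prediction by pruned non-negative least squares is given below. The pruning rule (drop the weakest endmember and re-solve) is a generic stand-in for the paper's Hilbert-space pruning criterion; all names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def pruned_nnls(D, y, k=3):
    """Sparse abundance prediction: solve non-negative least squares over
    the full endmember dictionary, then iteratively prune the endmember
    with the smallest abundance and re-solve until at most k remain."""
    active = list(range(D.shape[1]))
    while len(active) > k:
        a, _ = nnls(D[:, active], y)
        # Prune the weakest active endmember and re-solve.
        active.pop(int(np.argmin(a)))
    a, _ = nnls(D[:, active], y)
    abundances = np.zeros(D.shape[1])
    abundances[active] = a
    return abundances

# Example: recover a 2-sparse mixture of 10 candidate endmembers.
rng = np.random.default_rng(1)
D = np.abs(rng.normal(size=(100, 10)))         # candidate endmember spectra
truth = np.zeros(10)
truth[[2, 5]] = [0.7, 0.3]
y = D @ truth + 0.01 * rng.normal(size=100)    # observed mixed pixel
a = pruned_nnls(D, y, k=2)
print(np.nonzero(a)[0])  # recovered support; ideally [2 5]
```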

    In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?

    It is often said that a deep learning model is "invariant" to some specific type of transformation. However, what is meant by this statement strongly depends on the context in which it is made. In this paper we explore the nature of invariance and equivariance of deep learning models with the goal of better understanding the ways in which they actually capture these concepts on a formal level. We introduce a family of invariance and equivariance metrics that allows us to quantify these properties in a way that disentangles them from other metrics such as loss or accuracy. We use our metrics to better understand the two most popular methods used to build invariance into networks: data augmentation and equivariant layers. We draw a range of conclusions about invariance and equivariance in deep learning models, ranging from whether initializing a model with pretrained weights affects a trained model's invariance, to the extent to which invariance learned via training can generalize to out-of-distribution data. (To appear at NeurIPS 2022.)
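    The sketch below shows one simple metric in this spirit: quantify invariance of a feature map f to a transformation T as the average cosine similarity between f(x) and f(T(x)), independently of any loss or accuracy. The metric and names are illustrative assumptions, not the paper's exact family of metrics.

```python
import numpy as np

def invariance_score(f, T, X):
    """Average cosine similarity between features of original and
    transformed inputs: 1.0 means the representation is fully invariant
    to T; lower values mean T moves the representation."""
    Z, ZT = f(X), f(T(X))
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    ZT = ZT / np.linalg.norm(ZT, axis=1, keepdims=True)
    return float(np.mean(np.sum(Z * ZT, axis=1)))

# Example: a feature map that discards the last input coordinate is
# exactly invariant to a shift applied only to that coordinate.
f = lambda X: X[:, :-1]
T = lambda X: X + np.eye(X.shape[1])[-1]   # shift only the last coordinate
X = np.random.default_rng(2).normal(size=(100, 8))
print(invariance_score(f, T, X))  # 1.0
```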

    Path-Based Dictionary Augmentation: A Framework for Improving k-Sparse Image Processing
