77 research outputs found
BiGSeT: Binary Mask-Guided Separation Training for DNN-based Hyperspectral Anomaly Detection
Hyperspectral anomaly detection (HAD) aims to recognize a minority of
anomalies that are spectrally different from their surrounding background
without prior knowledge. Deep neural networks (DNNs), including autoencoders
(AEs), convolutional neural networks (CNNs) and vision transformers (ViTs),
have shown remarkable performance in this field due to their powerful ability
to model the complicated background. However, for reconstruction tasks, DNNs
tend to incorporate both background and anomalies into the estimated
background, which is referred to as the identical mapping problem (IMP) and
leads to significantly decreased performance. To address this limitation, we
propose a model-independent binary mask-guided separation training strategy for
DNNs, named BiGSeT. Our method introduces a separation training loss based on a
latent binary mask to separately constrain the background and anomalies in the
estimated image. The background is preserved, while the potential anomalies are
suppressed by using an efficient second-order Laplacian of Gaussian (LoG)
operator, generating a pure background estimate. In order to maintain
separability during training, we periodically update the mask using a robust
proportion threshold estimated before training. In our experiments, we
adopt a vanilla AE as the network to validate our training strategy on several
real-world datasets. Our results show superior performance compared to some
state-of-the-art methods. Specifically, we achieved a 90.67% AUC score on the
HyMap Cooke City dataset. Additionally, we applied our training strategy to
other deep network structures, achieving improved detection performance
compared to their original versions, demonstrating its effective
transferability. The code of our method will be available at
https://github.com/enter-i-username/BiGSeT.
Comment: 13 pages, 13 figures, submitted to IEEE TRANSACTIONS ON IMAGE PROCESSING
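The separation training idea can be sketched numerically. The toy below is a single-band illustration (real HSI has hundreds of bands) with a hand-rolled LoG kernel and a hypothetical anomaly weight `lam`; it is a sketch of the loss design described in the abstract, not the authors' implementation, which is at the linked repository.

```python
import numpy as np

def log_kernel(size=5, sigma=1.0):
    # 2-D Laplacian-of-Gaussian kernel (hypothetical size/sigma)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero-sum: flat regions give zero response

def separation_loss(x, x_hat, mask, lam=1.0):
    """Binary mask-guided separation loss (illustrative sketch).

    x, x_hat : (H, W) input image and network estimate
    mask     : (H, W) binary, 1 = suspected anomaly, 0 = background
    """
    bg = 1 - mask
    # background pixels: plain reconstruction error (preserve background)
    l_bg = np.sum(bg * (x - x_hat) ** 2) / max(bg.sum(), 1)
    # anomalous pixels: suppress the high-frequency LoG response of the estimate
    k = log_kernel()
    pad = k.shape[0] // 2
    xp = np.pad(x_hat, pad, mode="edge")
    H, W = x_hat.shape
    resp = np.zeros_like(x_hat)
    for i in range(H):
        for j in range(W):
            resp[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    l_an = np.sum(mask * resp ** 2) / max(mask.sum(), 1)
    return l_bg + lam * l_an
```

Minimizing the second term pushes the estimate toward a smooth, anomaly-free background inside the masked regions, while the first term keeps the background faithful.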
Hyperspectral Image Analysis through Unsupervised Deep Learning
Hyperspectral image (HSI) analysis has become an active research area in the computer vision field with a wide range of applications. However, in order to yield better recognition and analysis results, we need to address two challenging issues of HSI, i.e., the existence of mixed pixels and its significantly low spatial resolution (LR). In this dissertation, spectral unmixing (SU) and hyperspectral image super-resolution (HSI-SR) approaches are developed to address these two issues with advanced deep learning models in an unsupervised fashion. A specific application, anomaly detection, is also studied, to show the importance of SU. Although deep learning has achieved state-of-the-art performance on supervised problems, its practice on unsupervised problems has not been fully developed. To address the problem of SU, an untied denoising autoencoder is proposed to decompose the HSI into endmembers and abundances with non-negative and abundance sum-to-one constraints. The denoising capacity is incorporated into the network with a sparsity constraint to boost the performance of endmember extraction and abundance estimation. Moreover, the first attempt is made to solve the problem of HSI-SR using an unsupervised encoder-decoder architecture by fusing the LR HSI with the high-resolution multispectral image (MSI). The architecture is composed of two encoder-decoder networks, coupled through a shared decoder, to preserve the rich spectral information from the HSI network. It encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI.
The angular difference between the representations is minimized to reduce spectral distortion. Finally, a novel detection algorithm is proposed through spectral unmixing and dictionary-based low-rank decomposition, where the dictionary is constructed with mean-shift clustering and the coefficients of the dictionary are encouraged to be low-rank. Experimental evaluations show significant improvement in the performance of anomaly detection conducted on the abundances (through SU). The effectiveness of the proposed approaches has been evaluated thoroughly by extensive experiments, achieving state-of-the-art results.
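The two physical constraints on abundances, non-negativity and sum-to-one, can be enforced in a network by, for example, a softmax over the latent codes. The sketch below uses random data and hypothetical sizes purely to show the constrained linear mixing model; the dissertation's untied denoising autoencoder is more involved.

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# hypothetical sizes: B spectral bands, P endmembers, N pixels
rng = np.random.default_rng(0)
B, P, N = 50, 4, 100
E = np.abs(rng.normal(size=(P, B)))   # endmember signatures (decoder weights)
z = rng.normal(size=(N, P))           # encoder outputs (latent codes)

A = softmax(z)                        # abundances: non-negative, rows sum to 1
X_hat = A @ E                         # linear mixing model reconstruction
```

Because the softmax output is simultaneously non-negative and normalized, both constraints hold by construction rather than through penalty terms.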
Physics-constrained Hyperspectral Data Exploitation Across Diverse Atmospheric Scenarios
Hyperspectral target detection promises new operational advantages, with increasing instrument spectral resolution and robust material discrimination. Resolving surface materials requires a fast and accurate accounting of atmospheric effects to increase detection accuracy while minimizing false alarms. This dissertation investigates deep learning methods constrained by the processes governing radiative transfer to efficiently perform atmospheric compensation on data collected by long-wave infrared (LWIR) hyperspectral sensors. These compensation methods depend on generative modeling techniques and permutation-invariant neural network architectures to predict LWIR spectral radiometric quantities. The compensation algorithms developed in this work were examined from the perspective of target detection performance using collected data. These deep learning-based compensation algorithms resulted in detection performance comparable to established methods while accelerating the image processing chain by 8x.
Sketched Multi-view Subspace Learning for Hyperspectral Anomalous Change Detection
In recent years, multi-view subspace learning has been garnering increasing
attention. It aims to capture the inner relationships of the data that are
collected from multiple sources by learning a unified representation. In this
way, comprehensive information from multiple views is shared and preserved for
the generalization processes. As a special branch of temporal series
hyperspectral image (HSI) processing, the anomalous change detection task
focuses on detecting very small changes among different temporal images.
However, when the volume of the datasets is very large or the classes are
relatively comprehensive, existing methods may fail to find those changes
between the scenes and end up with poor detection results. In this paper,
inspired by the sketched representation and multi-view subspace learning, a
sketched multi-view subspace learning (SMSL) model is proposed for HSI
anomalous change detection. The proposed model preserves major information from
the image pairs and reduces computational complexity by using a sketched
representation matrix. Furthermore, the differences between scenes are
extracted by utilizing the specific regularizer of the self-representation
matrices. To evaluate the detection effectiveness of the proposed SMSL model,
experiments are conducted on a benchmark hyperspectral remote sensing dataset
and a natural hyperspectral dataset, and compared with other state-of-the-art
approaches.
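The computational benefit of a sketched representation can be illustrated with a random projection. The sketch below (all sizes hypothetical) compresses an N-column dictionary to m columns with a Gaussian sketching matrix, so the self-representation coefficient matrix shrinks from N x N to m x N; a ridge-regularized least-squares solve stands in for the SMSL objective.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, m = 2000, 200, 64           # pixels, spectral bands, sketch size (m << N)
X = rng.normal(size=(D, N))       # one hyperspectral scene, columns are pixels

# Gaussian sketching matrix compresses the self-representation dictionary
S = rng.normal(size=(N, m)) / np.sqrt(m)
Xs = X @ S                        # sketched dictionary: D x m instead of D x N

# self-representation now solves for an m x N coefficient matrix
# instead of N x N (ridge-regularized least squares as a stand-in)
lam = 1e-2
C = np.linalg.solve(Xs.T @ Xs + lam * np.eye(m), Xs.T @ X)
X_rec = Xs @ C                    # reconstruction from the sketched subspace
```

The normal-equations system is only m x m, which is where the complexity saving over the full N x N self-representation comes from.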
Learnable Reconstruction Methods from RGB Images to Hyperspectral Imaging: A Survey
Hyperspectral imaging enables versatile applications due to its competence in
capturing abundant spatial and spectral information, which are crucial for
identifying substances. However, the devices for acquiring hyperspectral images
are expensive and complicated. Therefore, many alternative spectral imaging
methods have been proposed by directly reconstructing the hyperspectral
information from lower-cost, more available RGB images. We present a thorough
investigation of these state-of-the-art spectral reconstruction methods from
the widespread RGB images. A systematic study and comparison of more than 25
methods has revealed that most of the data-driven deep learning methods are
superior to prior-based methods in terms of reconstruction accuracy and quality
despite lower speeds. This comprehensive review can serve as a fruitful
reference source for peer researchers, thus further inspiring future
development directions in related domains.
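A minimal prior-free baseline for the task the survey covers is a linear least-squares map from RGB triplets to spectra, which the reviewed data-driven deep methods improve upon. The sketch below simulates training data with an assumed (random) camera response matrix; real systems use measured responses and far richer models.

```python
import numpy as np

rng = np.random.default_rng(2)
B, N = 31, 500                     # spectral bands, training pixels (hypothetical)
spectra = np.abs(rng.normal(size=(N, B)))   # ground-truth training spectra
R = np.abs(rng.normal(size=(B, 3)))         # assumed camera response matrix
rgb = spectra @ R                           # simulated RGB observations

# baseline spectral reconstruction: a single linear map fit by least squares
W, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)   # 3 x B mapping
recon = rgb @ W
rmse = np.sqrt(np.mean((recon - spectra) ** 2))
```

Since three RGB values cannot uniquely determine 31 band values, the linear map only captures the dominant correlations; deep methods add learned spatial and spectral priors to resolve the ambiguity.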
Efficient Nonlinear Dimensionality Reduction for Pixel-wise Classification of Hyperspectral Imagery
Classification, target detection, and compression are all important tasks in analyzing hyperspectral imagery (HSI). Because of the high dimensionality of HSI, it is often useful to identify low-dimensional representations of HSI data that can be used to make analysis tasks tractable. Traditional linear dimensionality reduction (DR) methods are not adequate due to the nonlinear distribution of HSI data. Many nonlinear DR methods, which are successful in the general data processing domain, such as Local Linear Embedding (LLE) [1], Isometric Feature Mapping (ISOMAP) [2] and Kernel Principal Components Analysis (KPCA) [3], run very slowly and require large amounts of memory when applied to HSI. For example, applying KPCA to the 512×217 pixel, 204-band Salinas image using a modern desktop computer (AMD FX-6300 Six-Core Processor, 32 GB memory) requires more than 5 days of computing time and 28GB memory!
In this thesis, we propose two different algorithms for significantly improving the computational efficiency of nonlinear DR without adversely affecting the performance of the classification task: Simple Linear Iterative Clustering (SLIC) superpixels and semi-supervised deep autoencoder networks (SSDAN). SLIC is a very popular algorithm developed for computing superpixels in RGB images that can easily be extended to HSI. Each superpixel includes hundreds or thousands of pixels based on spatial and spectral similarities and is represented by the mean spectrum and spatial position of all of its component pixels. Since the number of superpixels is much smaller than the number of pixels in the image, they can be used as input for nonlinear DR, which significantly reduces the required computation time and memory versus providing all of the original pixels as input. After nonlinear DR is performed using superpixels as input, an interpolation step can be used to obtain the embedding of each original image pixel in the low-dimensional space. To illustrate the power of using superpixels in an HSI classification pipeline, we conduct experiments on three widely used and publicly available hyperspectral images: Indian Pines, Salinas and Pavia. The experimental results for all three images demonstrate that for moderately sized superpixels, the overall accuracy of classification using superpixel-based nonlinear DR matches and sometimes exceeds the overall accuracy of classification using pixel-based nonlinear DR, with a computational speed that is two to three orders of magnitude faster.
Even though superpixel-based nonlinear DR shows promise for HSI classification, it does have disadvantages. First, it is costly to perform out-of-sample extensions. Second, it does not generalize to handle other types of data that might not have spatial information. Third, the original input pixels cannot be approximately recovered, as is possible in many DR algorithms. In order to overcome these difficulties, a new autoencoder network, SSDAN, is proposed. It is a fully-connected semi-supervised autoencoder network that performs nonlinear DR in a manner that enables class information to be integrated. Features learned from SSDAN will be similar to those computed via traditional nonlinear DR, and features from the same class will be close to each other. Once the network is trained well with training data, test data can be easily mapped to the low-dimensional embedding. Any kind of data can be used to train an SSDAN, and the decoder portion of the SSDAN can easily recover the initial input with reasonable loss. Experimental results on pixel-based classification in the Indian Pines, Salinas and Pavia images show that SSDANs can approximate the overall accuracy of nonlinear DR while significantly improving computational efficiency. We also show that transfer learning can be used to fine-tune features of a trained SSDAN for a new HSI dataset. Finally, experimental results on HSI compression show a trade-off between the Overall Accuracy (OA) of extracted features and the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image.
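The superpixel pipeline can be sketched end to end: average spectra per superpixel, run DR on the means, then interpolate back to pixels. The toy below substitutes a crude rectangular grid for SLIC and PCA for the nonlinear DR methods (KPCA, LLE, ISOMAP), purely to show the data flow and why the DR step sees K superpixels instead of H*W pixels.

```python
import numpy as np

rng = np.random.default_rng(3)
H, W, B, d = 32, 32, 20, 3          # image size, bands, embedding dim (hypothetical)
img = rng.normal(size=(H, W, B))

# 1. crude grid "superpixels" as a stand-in for SLIC
s = 8                               # superpixel side length
labels = (np.arange(H)[:, None] // s) * (W // s) + (np.arange(W)[None, :] // s)
K = labels.max() + 1                # number of superpixels (here 16, vs 1024 pixels)
means = np.stack([img[labels == k].mean(axis=0) for k in range(K)])

# 2. DR on the K superpixel mean spectra (PCA stand-in for a nonlinear method)
Xc = means - means.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
emb_sp = Xc @ Vt[:d].T              # K x d embedding of the superpixels

# 3. nearest-superpixel interpolation back to every original pixel
emb_px = emb_sp[labels]             # H x W x d per-pixel embedding
```

Step 2 is where the savings occur: the DR algorithm's cost scales with K rather than with the full pixel count, matching the two-to-three-orders-of-magnitude speedups reported above.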
Recent Advances in Image Restoration with Applications to Real World Problems
In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, which include image super-resolution, image fusion to enhance spatial, spectral, and temporal resolutions, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.
Inference in supervised spectral classifiers for on-board hyperspectral imaging: An overview
Machine learning techniques are widely used for pixel-wise classification of hyperspectral images. These methods can achieve high accuracy, but most of them are computationally intensive models. This poses a problem for their implementation in low-power and embedded systems intended for on-board processing, in which energy consumption and model size are as important as accuracy. With a focus on embedded and on-board systems (in which only the inference step is performed after an off-line training process), in this paper we provide a comprehensive overview of the inference properties of the most relevant techniques for hyperspectral image classification. For this purpose, we compare the size of the trained models and the operations required during the inference step (which are directly related to the hardware and energy requirements). Our goal is to search for appropriate trade-offs between on-board implementation factors (such as model size and energy consumption) and classification accuracy.
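The two inference-step quantities the overview compares, trained-model size and per-input operations, can be counted directly for a hypothetical small fully-connected classifier (the layer sizes below are illustrative, not from the paper):

```python
# count model size and inference operations for a small pixel-wise MLP classifier
layers = [(200, 64), (64, 32), (32, 16)]   # (inputs, outputs) per dense layer

params = sum(i * o + o for i, o in layers)   # weights plus biases
macs = sum(i * o for i, o in layers)         # multiply-accumulates per pixel
size_kib = params * 4 / 1024                 # storage assuming float32 weights
```

For an on-board system, `size_kib` bounds the memory footprint and `macs` scales linearly with the number of pixels classified, which is why such counts stand in for hardware and energy requirements.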
Application of Multi-Sensor Fusion Technology in Target Detection and Recognition
Application of multi-sensor fusion technology has drawn a lot of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can obtain the complementary properties of targets by considering multiple sensors. On the other hand, they can achieve a detailed environment description and accurate detection of targets of interest based on the information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. Articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers deal with fundamental theoretical analyses as well as demonstrations of their application to real-world problems.