
    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough, has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization.
    Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine

    BiGSeT: Binary Mask-Guided Separation Training for DNN-based Hyperspectral Anomaly Detection

    Hyperspectral anomaly detection (HAD) aims to recognize a minority of anomalies that are spectrally different from their surrounding background without prior knowledge. Deep neural networks (DNNs), including autoencoders (AEs), convolutional neural networks (CNNs) and vision transformers (ViTs), have shown remarkable performance in this field due to their powerful ability to model the complicated background. However, for reconstruction tasks, DNNs tend to incorporate both background and anomalies into the estimated background, which is referred to as the identical mapping problem (IMP) and leads to significantly decreased performance. To address this limitation, we propose a model-independent binary mask-guided separation training strategy for DNNs, named BiGSeT. Our method introduces a separation training loss based on a latent binary mask to separately constrain the background and anomalies in the estimated image. The background is preserved, while the potential anomalies are suppressed by using an efficient second-order Laplacian of Gaussian (LoG) operator, generating a pure background estimate. To maintain separability during training, we periodically update the mask using a robust proportion threshold estimated before training. In our experiments, we adopt a vanilla AE as the network to validate our training strategy on several real-world datasets. Our results show superior performance compared to several state-of-the-art methods; specifically, we achieve a 90.67% AUC score on the HyMap Cooke City dataset. Additionally, we apply our training strategy to other deep network structures and achieve improved detection performance compared to their original versions, demonstrating its effective transferability. The code of our method will be available at https://github.com/enter-i-username/BiGSeT.
    Comment: 13 pages, 13 figures, submitted to IEEE TRANSACTIONS ON IMAGE PROCESSING
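The separation loss described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the weight `lam`, and the use of a 4-neighbour discrete Laplacian as a stand-in for the second-order LoG operator are all assumptions for the sketch.

```python
import numpy as np

def discrete_laplacian(x):
    # 4-neighbour discrete Laplacian, a cheap stand-in for the
    # second-order LoG operator named in the abstract
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)

def separation_loss(recon, image, mask, lam=0.1):
    # mask == 1 marks presumed background: pulled toward the input.
    # mask == 0 marks potential anomalies: flattened by penalizing
    # the Laplacian response of the reconstruction there.
    bg_term = np.mean((mask * (recon - image)) ** 2)
    anomaly_term = np.mean(((1 - mask) * discrete_laplacian(recon)) ** 2)
    return bg_term + lam * anomaly_term
```

With an all-background mask and a perfect reconstruction the loss is zero, which matches the intent: the network is rewarded for reproducing background and penalized for reproducing anomaly structure.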

    Towards the Mitigation of Correlation Effects in the Analysis of Hyperspectral Imagery with Extension to Robust Parameter Design

    Standard anomaly detectors and classifiers assume the data to be uncorrelated and homogeneous, neither of which is inherent in Hyperspectral Imagery (HSI). To address this detection difficulty, a new method termed Iterative Linear RX (ILRX) uses a line of pixels, which gives it an advantage over RX in that it mitigates some of the effects of correlation due to spatial proximity, while the iterative adaptation from Iterative RX (IRX) simultaneously eliminates outliers. In this research, removing potential anomalies from the mean vector and covariance matrix estimates via anomaly detectors and addressing non-homogeneity through cluster analysis, both of which are often ignored when detecting or classifying anomalies, are shown to improve algorithm performance. Global anomaly detectors require the user to provide various parameters to analyze an image. These user-defined settings can be treated as control variables, and certain properties of the imagery can be employed as noise variables. The presence of these separate factors suggests the use of Robust Parameter Design (RPD) to locate optimal settings for an algorithm. This research extends the standard RPD model to include three-factor interactions. The new models are then applied to the Autonomous Global Anomaly Detector (AutoGAD) to demonstrate improved setting combinations.
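The global RX detector that these methods build on scores each pixel spectrum by its Mahalanobis distance from the scene mean under the global covariance. A minimal sketch (the function name and the small diagonal regularizer are illustrative assumptions, not part of any cited implementation):

```python
import numpy as np

def rx_scores(pixels):
    # pixels: (n_pixels, n_bands) array of spectra.
    # Score = Mahalanobis distance of each spectrum from the scene
    # mean under the global background covariance.
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized
    d = pixels - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)
```

The correlation problem the abstract describes shows up here directly: the mean and covariance are estimated from all pixels, anomalies included, which is exactly what ILRX's iteration and outlier removal are meant to counteract.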

    A Locally Adaptable Iterative RX Detector

    We present an unsupervised anomaly detection method for hyperspectral imagery (HSI) based on data characteristics inherent in HSI. A locally adaptive technique of iteratively refining the well-known RX detector (LAIRX) is developed. The technique is motivated by the need for better first- and second-order statistics estimation via avoidance of anomaly presence. Overall, experiments show favorable Receiver Operating Characteristic (ROC) curves when compared to a global anomaly detector based upon the Support Vector Data Description (SVDD) algorithm, the conventional RX detector, and decomposed versions of the LAIRX detector. Furthermore, the utilization of parallel and distributed processing allows fast processing times, making LAIRX applicable in an operational setting.
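The idea of iterative refinement, estimating background statistics while progressively excluding suspected anomalies, can be sketched as follows. This is a generic illustration of the iterate-and-trim pattern, not the LAIRX algorithm itself; the trim fraction, iteration count, and function names are assumptions.

```python
import numpy as np

def mahalanobis_scores(pixels, ref):
    # Score `pixels` against statistics estimated from `ref`
    mu = ref.mean(axis=0)
    cov = np.cov(ref, rowvar=False) + 1e-6 * np.eye(ref.shape[1])
    d = pixels - mu
    return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

def iterative_rx(pixels, n_iter=3, trim=0.02):
    # Repeatedly drop the highest-scoring pixels before re-estimating
    # the mean and covariance, so suspected anomalies stop
    # contaminating the background statistics.
    keep = np.arange(len(pixels))
    for _ in range(n_iter):
        scores = mahalanobis_scores(pixels[keep], pixels[keep])
        cut = np.quantile(scores, 1.0 - trim)
        keep = keep[scores <= cut]
    # score every pixel against the cleaned background statistics
    return mahalanobis_scores(pixels, pixels[keep])
```

After a few passes the anomalies no longer inflate the covariance estimate, which is the "avoidance of anomaly presence" the abstract motivates.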

    Models and Methods for Automated Background Density Estimation in Hyperspectral Anomaly Detection

    Detecting targets with unknown spectral signatures in hyperspectral imagery has proven to be a topic of great interest in several applications. Because no knowledge about the targets of interest is assumed, this task is performed by searching the image for anomalous pixels, i.e. those pixels deviating from a statistical model of the background. According to the hyperspectral literature, there are two main approaches to Anomaly Detection (AD), leading to two different ways of modeling the background: global and local. Global AD algorithms are designed to locate small rare objects that are anomalous with respect to the global background, identified by a large portion of the image. On the other hand, in local AD strategies, pixels with significantly different spectral features from a local neighborhood just surrounding the observed pixel are detected as anomalies. In this thesis work, a new scheme is proposed for detecting both global and local anomalies. Specifically, a simplified Likelihood Ratio Test (LRT) decision strategy is derived that involves thresholding the background log-likelihood and, thus, only needs the specification of the background Probability Density Function (PDF). Within this framework, the use of parametric, semi-parametric (in particular finite mixtures), and non-parametric models is investigated for the background PDF estimation. Although such approaches are well known and have been widely employed in multivariate data analysis, they have seldom been applied to estimate the hyperspectral background PDF, mostly due to the difficulty of reliably learning the model parameters without operator intervention, which is highly desirable in practical AD tasks.
In fact, this work represents the first attempt to jointly examine such methods in order to assess and discuss the most critical issues related to their use for PDF estimation of the hyperspectral background, with specific reference to the detection of anomalous objects in a scene. Specifically, semi- and non-parametric estimators have been successfully employed to estimate the image background PDF with the aim of detecting global anomalies in a scene by means of ad hoc learning procedures. In particular, strategies developed within a Bayesian framework have been considered for automatically estimating the parameters of mixture models, along with one of the most well-known non-parametric techniques, the fixed kernel density estimator (FKDE). In the latter, the performance and the modeling ability depend on scale parameters, called bandwidths. It has been shown that the use of bandwidths that are fixed across the entire feature space, as done in the FKDE, is not effective when the sample data exhibit different local peculiarities across the data domain, which generally occurs in practical applications. Therefore, some possibilities are investigated to improve the image background PDF estimation of the FKDE by allowing the bandwidths to vary over the estimation domain, thus adapting the amount of smoothing to the local density of the data so as to more reliably and accurately follow the background data structure of hyperspectral images of a scene. The use of such variable bandwidth kernel density estimators (VKDE) is also proposed for estimating the background PDF within the considered AD scheme for detecting local anomalies. This choice aims to cope with the problem of non-Gaussian backgrounds, improving classical local AD algorithms that involve parametric and non-parametric background models.
The locally data-adaptive non-parametric model has been chosen since it combines the potential, typical of non-parametric PDF estimators, of modeling data without specific distributional assumptions with the benefits of bandwidths that vary across the data domain. The ability of the proposed AD scheme resulting from the application of different background PDF models and learning methods is experimentally evaluated on real hyperspectral images containing objects that are anomalous with respect to the background.
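The core decision rule above, thresholding the background log-likelihood, and the fixed-versus-variable bandwidth distinction can be sketched together. This is a simplified 1-D Gaussian-kernel illustration, not the thesis implementation; the function names and the threshold value in the usage are assumptions.

```python
import numpy as np

def kde_loglik(x_eval, data, bandwidths):
    # Gaussian KDE log-likelihood (1-D sketch). `bandwidths` may be a
    # scalar (fixed-kernel FKDE) or a per-sample array (variable-
    # bandwidth VKDE, adapting smoothing to local density).
    h = np.broadcast_to(np.asarray(bandwidths, dtype=float), data.shape)
    z = (x_eval[:, None] - data[None, :]) / h[None, :]
    k = np.exp(-0.5 * z ** 2) / (h[None, :] * np.sqrt(2.0 * np.pi))
    return np.log(k.mean(axis=1) + 1e-300)

def flag_anomalies(x_eval, data, bandwidths, log_thresh):
    # Simplified LRT: declare anomalous any sample whose background
    # log-likelihood falls below the threshold.
    return kde_loglik(x_eval, data, bandwidths) < log_thresh
```

Passing a per-sample bandwidth array, e.g. wider kernels where the background samples are sparse, is the variable-bandwidth idea in miniature: the same code path, only the smoothing adapts locally.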

    Low-Rank and Sparse Decomposition for Hyperspectral Image Enhancement and Clustering

    In this dissertation, new algorithms are developed to enhance hyperspectral imaging analysis. A tensor data format is applied to the sparse and low-rank decomposition of hyperspectral datasets, which can enhance classification and detection performance, and a multi-view learning technique is applied to hyperspectral image clustering. Furthermore, a kernel version of the multi-view learning technique is proposed, which can improve clustering performance. Most low-rank and sparse decomposition algorithms for HSI analysis are based on a matrix data format. Because HSI has a high spectral dimension, a tensor-based extended low-rank and sparse decomposition (TELRSD) is proposed in this dissertation for better HSI classification using the low-rank tensor part and HSI detection using the sparse tensor part. With this tensor-based method, HSI is processed in a 3D data format, and the relationships between spectral bands and pixels remain intact during the decomposition process. The proposed algorithm is compared with other state-of-the-art methods, and the experimental results show that TELRSD performs best among all comparison algorithms. HSI clustering is an unsupervised task that aims to group pixels without labeled information. Low-rank sparse subspace clustering (LRSSC) is among the most popular algorithms for this task. The spatial-spectral based multi-view low-rank sparse subspace clustering (SSMLC) algorithm is proposed in this dissertation, extending LRSSC with the multi-view learning technique. In this algorithm, spectral and spatial views are created to generate a multi-view dataset of the HSI, where spectral partitioning, morphological component analysis (MCA), and principal component analysis (PCA) are applied to create the other views. Furthermore, a kernel version of SSMLC (k-SSMLC) is also investigated.
The performance of SSMLC and k-SSMLC is compared with sparse subspace clustering (SSC), low-rank sparse subspace clustering (LRSSC), and spectral-spatial sparse subspace clustering (S4C). The results show that SSMLC improves on LRSSC and that k-SSMLC performs best. Spectral clustering has been proven equivalent to a non-negative matrix factorization (NMF) problem, so NMF can be applied to the clustering problem. To include local and nonlinear features of the data source, orthogonal NMF (ONMF), graph-regularized NMF (GNMF), and kernel NMF (k-NMF) have been proposed for better clustering performance. A non-linear orthogonal graph NMF (k-OGNMF) combines kernel, orthogonality, and graph constraints in NMF, pushing clustering performance up further. In the HSI domain, a kernel multi-view based orthogonal graph NMF (k-MOGNMF) is applied to subspace clustering, where k-OGNMF is extended with the multi-view algorithm; it achieves better performance and computational efficiency.
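The low-rank + sparse split underlying this family of methods can be illustrated in matrix form. This is a generic alternating sketch, not TELRSD (which works on tensors): the truncated-SVD/soft-threshold alternation, the fixed target rank, and the threshold `lam` are all assumptions made for the illustration.

```python
import numpy as np

def lowrank_sparse_split(M, rank, lam=0.1, n_iter=25):
    # Alternate two steps: fit the low-rank part L by truncated SVD
    # of M - S, then fit the sparse part S by elementwise
    # soft-thresholding of the residual M - L. The low-rank part
    # models background structure; the sparse part captures the few
    # pixels (anomalies) the background model cannot explain.
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # soft threshold
    return L, S
```

By construction each entry of M - L - S is at most `lam` in magnitude after the final thresholding step, so M is approximately recovered as background plus sparse outliers.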

    Reconstruction Error and Principal Component Based Anomaly Detection in Hyperspectral imagery

    The rapid expansion of remote sensing and information collection capabilities demands methods to highlight interesting or anomalous patterns within an overabundance of data. This research addresses this issue for hyperspectral imagery (HSI). Two new reconstruction-based HSI anomaly detectors are outlined: one using principal component analysis (PCA), and the other a form of non-linear PCA called logistic principal component analysis. Two very effective, yet relatively simple, modifications to the autonomous global anomaly detector are also presented, improving algorithm performance and enabling receiver operating characteristic analysis. A novel technique for HSI anomaly detection dubbed multiple PCA is introduced and found to perform as well as or better than existing detectors on HYDICE data while using only linear deterministic methods. Finally, a response-surface-based optimization is performed on algorithm parameters to achieve consistent, desired algorithm performance.
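The PCA-based reconstruction-error detector mentioned above has a compact form: project each mean-centred spectrum onto the leading principal components and score it by the energy the background subspace fails to explain. A minimal sketch (the function name and the fixed component count are assumptions; the abstract's logistic-PCA variant is not shown):

```python
import numpy as np

def pca_recon_error(pixels, n_components):
    # pixels: (n_pixels, n_bands). Background spectra lie close to the
    # span of the leading principal directions, so their reconstruction
    # error is small; anomalies stick out of that subspace.
    X = pixels - pixels.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:n_components]            # (k, n_bands) principal directions
    recon = X @ P.T @ P              # project onto subspace and back
    return np.sum((X - recon) ** 2, axis=1)
```

Unlike RX, this score ignores variation inside the background subspace entirely, which is why reconstruction-based detectors behave differently from covariance-based ones on structured backgrounds.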

    Change Detection Using Landsat and Worldview Images

    This paper presents preliminary results of using Landsat and WorldView images for change detection. The studied area underwent some significant changes, such as the construction of buildings, between May 2014 and October 2015. We investigated several simple, practical, and effective approaches to change detection. For Landsat images, we first performed pansharpening to enhance the resolution to 15 meters. We then performed chronochrome covariance equalization between the two images. The residual between the two equalized images was then analyzed using several simple algorithms such as direct subtraction and the global Reed-Xiaoli (GRX) detector. Experimental results using actual Landsat images clearly demonstrated that the proposed methods are effective. For WorldView images, we used pansharpened images with only four bands for change detection. The performance of the aforementioned algorithms is comparable to that of a commercial package developed by DigitalGlobe.
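The chronochrome step can be sketched as a linear prediction of the time-2 image from the time-1 image via cross- and auto-covariances, with the prediction residual serving as the change map. This is a generic per-pixel sketch under the assumption that both images are co-registered and flattened to (pixels, bands); the function name is illustrative.

```python
import numpy as np

def chronochrome_residual(X, Y):
    # X, Y: (n_pixels, n_bands) co-registered images at times 1 and 2.
    # Estimate the least-squares linear map A with Y ≈ A X from the
    # cross-covariance and auto-covariance, then return the residual:
    # unchanged pixels are predicted well, changed pixels are not.
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    Cxx = Xc.T @ Xc / len(X)
    Cyx = Yc.T @ Xc / len(X)
    A = Cyx @ np.linalg.pinv(Cxx)
    return Yc - Xc @ A.T
```

The per-pixel norm of this residual is what the paper's simple analyzers (direct subtraction, GRX) would then threshold or score.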