
    A new bandwidth selection criterion for using SVDD to analyze hyperspectral data

    This paper presents a method for hyperspectral image classification that uses support vector data description (SVDD) with the Gaussian kernel function. SVDD has been a popular machine learning technique for single-class classification, but selecting the proper Gaussian kernel bandwidth to achieve the best classification performance has always been a challenging problem. This paper proposes a new automatic, unsupervised Gaussian kernel bandwidth selection approach, which is used with a multiclass SVDD classification scheme. The performance of the multiclass SVDD classification scheme is evaluated on three frequently used hyperspectral data sets, and preliminary results show that the proposed method can achieve better performance than published results on these data sets.
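The paper's bandwidth criterion is not reproduced in the abstract. As a point of reference, a common unsupervised baseline for choosing a Gaussian kernel bandwidth is the median pairwise-distance heuristic; the sketch below is an assumption for illustration, not the paper's method, and the variable names and dimensions are invented:

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Set the Gaussian kernel bandwidth to the median pairwise distance."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(X), k=1)   # distinct pairs only
    return np.sqrt(np.median(d2[iu]))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))           # 50 pixels, 10 spectral bands
sigma = median_heuristic_bandwidth(X)
```

The median heuristic is fully unsupervised, which is the same constraint the paper's criterion operates under.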

    A Locally Adaptable Iterative RX Detector

    We present an unsupervised anomaly detection method for hyperspectral imagery (HSI) based on data characteristics inherent in HSI. A locally adaptive technique of iteratively refining the well-known RX detector (LAIRX) is developed. The technique is motivated by the need for better first- and second-order statistic estimation via avoidance of anomaly presence. Overall, experiments show favorable Receiver Operating Characteristic (ROC) curves when compared to a global anomaly detector based upon the Support Vector Data Description (SVDD) algorithm, the conventional RX detector, and decomposed versions of the LAIRX detector. Furthermore, the utilization of parallel and distributed processing allows fast processing times, making LAIRX applicable in an operational setting.
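For context, the conventional global RX detector that LAIRX iteratively refines scores each pixel by its Mahalanobis distance to scene-wide background statistics. A minimal sketch, with a synthetic cube whose dimensions and implanted anomaly are illustrative:

```python
import numpy as np

def rx_scores(cube):
    """Global RX: Mahalanobis distance of each pixel to the scene-wide mean/covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    # per-pixel quadratic form d' C^-1 d
    return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(h, w)

rng = np.random.default_rng(1)
cube = rng.normal(size=(20, 20, 5))   # synthetic 20x20 scene, 5 bands
cube[10, 10] += 8.0                   # implanted anomaly
scores = rx_scores(cube)
```

LAIRX's refinement, per the abstract, re-estimates these statistics locally and iteratively while excluding suspected anomalies.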

    Towards the Mitigation of Correlation Effects in the Analysis of Hyperspectral Imagery with Extension to Robust Parameter Design

    Standard anomaly detectors and classifiers assume data to be uncorrelated and homogeneous, neither of which is inherent in Hyperspectral Imagery (HSI). To address the detection difficulty, a new method termed Iterative Linear RX (ILRX) uses a line of pixels, which gives it an advantage over RX in that it mitigates some of the effects of correlation due to spatial proximity, while the iterative adaptation inherited from Iterative RX (IRX) simultaneously eliminates outliers. In this research, two practices that are often ignored when detecting or classifying anomalies are shown to improve algorithm performance: using anomaly detectors to remove potential anomalies from mean vector and covariance matrix estimates, and addressing non-homogeneity through cluster analysis. Global anomaly detectors require the user to provide various parameters to analyze an image. These user-defined settings can be thought of as control variables, and certain properties of the imagery can be employed as noise variables. The presence of these separate factors suggests the use of Robust Parameter Design (RPD) to locate optimal settings for an algorithm. This research extends the standard RPD model to include three-factor interactions. These new models are then applied to the Autonomous Global Anomaly Detector (AutoGAD) to demonstrate improved setting combinations.

    GAN-based Hyperspectral Anomaly Detection

    In this paper, we propose a generative adversarial network (GAN)-based hyperspectral anomaly detection algorithm. In the proposed algorithm, we train a GAN model to generate a synthetic background image that is as close as possible to the original background image. By subtracting the synthetic image from the original one, we are able to remove the background from the hyperspectral image. Anomaly detection is then performed by applying the Reed-Xiaoli (RX) anomaly detector (AD) to the spectral difference image. In the experimental part, we compare our proposed method with the classical RX, Weighted-RX (WRX), and support vector data description (SVDD)-based anomaly detectors and the deep autoencoder anomaly detection (DAEAD) method on synthetic and real hyperspectral images. The detection results show that our proposed algorithm outperforms the other methods in the benchmark.
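The GAN itself is not reproduced here; the detection stage it feeds, RX applied to the difference between the scene and an estimated background, can be sketched as follows. The smooth synthetic gradient below merely stands in for the GAN-generated background image, and all dimensions are illustrative:

```python
import numpy as np

def residual_rx(cube, background):
    """RX applied to the spectral difference between the scene and a background estimate."""
    diff = (cube - background).astype(float)
    h, w, b = diff.shape
    X = diff.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(h, w)

rng = np.random.default_rng(1)
# smooth gradient standing in for the GAN-synthesized background
background = np.linspace(0.0, 1.0, 15).reshape(15, 1, 1) * np.ones((15, 15, 4))
cube = background + 0.1 * rng.normal(size=(15, 15, 4))
cube[7, 7] += 1.0   # implanted anomaly
scores = residual_rx(cube, background)
```

Subtracting a good background estimate first means the RX statistics model only residual noise, which is the intuition the abstract describes.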

    Clustering Hyperspectral Imagery for Improved Adaptive Matched Filter Performance

    This paper offers improvements to adaptive matched filter (AMF) performance by addressing the correlation and non-homogeneity problems inherent to hyperspectral imagery (HSI). The mean vector and covariance matrix of the background should be estimated from “target-free” data, because target data included in those estimates can act as statistical outliers and severely contaminate the estimators. This fact serves as the impetus for a 2-stage process: first, attempt to remove the target data from the background by employing anomaly detectors; next, with the remaining data relatively “target-free,” the way is clear for signature matching. For the first stage, we tested seven different anomaly detectors, some of which are designed specifically to deal with the spatial correlation of HSI data and/or the presence of anomalous pixels in local or global mean and covariance estimators. For the second stage, we investigated the use of cluster analytic methods to boost AMF performance. The research shows that accounting for spatial correlation effects in the detector yields nearly “target-free” data for use in an AMF that benefits greatly from the use of cluster analysis methods.
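The signature-matching stage can be sketched as follows, with the background statistics estimated from target-free pixels as the abstract recommends. The signature, dimensions, and data are all illustrative, and this is the common mean-removed form of the AMF statistic rather than the paper's exact implementation:

```python
import numpy as np

def amf_scores(X, s, mu, cov):
    """AMF score per pixel: ((s-mu)' C^-1 (x-mu))^2 / ((s-mu)' C^-1 (s-mu))."""
    cinv = np.linalg.inv(cov)
    s0 = s - mu
    proj = (X - mu) @ cinv @ s0
    return proj ** 2 / (s0 @ cinv @ s0)

rng = np.random.default_rng(2)
background = rng.normal(size=(200, 6))      # target-free pixels, 6 bands
s = np.full(6, 3.0)                         # known target signature
X = background.copy()
X[50] = s + 0.2 * rng.normal(size=6)        # implant the target
mu = background.mean(axis=0)                # stats from target-free data only
cov = np.cov(background, rowvar=False)
scores = amf_scores(X, s, mu, cov)
```

Estimating `mu` and `cov` from the target-free set, rather than from `X`, is exactly the contamination-avoidance point the abstract makes.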

    Reconstruction Error and Principal Component Based Anomaly Detection in Hyperspectral imagery

    The rapid expansion of remote sensing and information collection capabilities demands methods to highlight interesting or anomalous patterns within an overabundance of data. This research addresses this issue for hyperspectral imagery (HSI). Two new reconstruction-based HSI anomaly detectors are outlined: one using principal component analysis (PCA), and the other a form of non-linear PCA called logistic principal component analysis. Two very effective, yet relatively simple, modifications to the autonomous global anomaly detector are also presented, improving algorithm performance and enabling receiver operating characteristic analysis. A novel technique for HSI anomaly detection dubbed multiple PCA is introduced and found to perform as well as or better than existing detectors on HYDICE data while using only linear deterministic methods. Finally, a response surface based optimization is performed on the algorithm parameters so as to achieve consistent, desired algorithm performance.
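The linear-PCA variant of reconstruction-error anomaly detection can be sketched in a few lines: project each pixel onto the top-k principal components and score it by how poorly it is reconstructed. The synthetic low-rank data and rank choice below are illustrative, not the paper's experiment:

```python
import numpy as np

def pca_recon_error(X, k):
    """Score each row by its squared reconstruction error after a rank-k PCA."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k]                # top-k principal axes
    recon = Xc @ P.T @ P      # project onto the PCA subspace and back
    return np.sum((Xc - recon) ** 2, axis=1)

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 8))                            # background on a 2-D subspace
X = rng.normal(size=(100, 2)) @ A + 0.01 * rng.normal(size=(100, 8))
X[10] += 5.0 * rng.normal(size=8)                      # anomaly leaves the subspace
err = pca_recon_error(X, k=2)
```

Background pixels that lie near the principal subspace reconstruct almost perfectly, so the error concentrates on spectra the subspace cannot explain.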

    Kernel-Based Framework for Multitemporal and Multisource Remote Sensing Data Classification and Change Detection

    The multitemporal classification of remote sensing images is a challenging problem, in which the efficient combination of different sources of information (e.g., temporal, contextual, or multisensor) can improve the results. In this paper, we present a general framework based on kernel methods for the integration of heterogeneous sources of information. Using the theoretical principles of this framework, three main contributions are presented. First, a novel family of kernel-based methods for the multitemporal classification of remote sensing images is presented. The second contribution is the development of nonlinear kernel classifiers for the well-known difference and ratioing change detection methods by formulating them in an adequate high-dimensional feature space. Finally, the presented methodology allows the integration of contextual information and multisensor images with different levels of nonlinear sophistication. The binary support vector (SV) classifier and the one-class SV domain description classifier are evaluated using both linear and nonlinear kernel functions. Good performance on synthetic and real multitemporal classification scenarios illustrates the generality of the framework and the capabilities of the proposed algorithms.
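The kernelized "difference" change detector mentioned above rests on the fact that the inner product of feature-space differences expands into four ordinary kernel evaluations, since ⟨φ(x1)-φ(x2), φ(y1)-φ(y2)⟩ = K(x1,y1) - K(x1,y2) - K(x2,y1) + K(x2,y2). A sketch with an RBF kernel, with function names and data invented for illustration:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def difference_kernel(X1, X2, Y1, Y2, gamma=0.5):
    """Kernelized difference: <phi(x1)-phi(x2), phi(y1)-phi(y2)>."""
    return (rbf(X1, Y1, gamma) - rbf(X1, Y2, gamma)
            - rbf(X2, Y1, gamma) + rbf(X2, Y2, gamma))

rng = np.random.default_rng(4)
X1 = rng.normal(size=(30, 5))              # image at time 1 (pixels x bands)
X2 = X1 + 0.1 * rng.normal(size=(30, 5))   # image at time 2
K = difference_kernel(X1, X2, X1, X2)
```

Because it is the Gram matrix of feature-space differences, the resulting matrix is positive semidefinite and can be plugged directly into an SV classifier.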

    Novel Pattern Recognition Techniques for Improved Target Detection in Hyperspectral Imagery

    A fundamental challenge in target detection in hyperspectral imagery is spectral variability. In target detection applications, we are provided with a pure target signature; we do not have a collection of samples that characterize the spectral variability of the target. Another problem is that the performance of stochastic detection algorithms such as the spectral matched filter can be detrimentally affected by the assumption of multivariate normality of the data, which is often violated in practical situations. We address the challenge of the lack of training samples by creating two models to characterize the target class spectral variability: the first model makes no assumptions regarding inter-band correlation, while the second model uses a first-order Markov-based scheme to exploit correlation between bands. Using these models, we present two techniques for meeting these challenges: the kernel-based support vector data description (SVDD) and spectral fringe-adjusted joint transform correlation (SFJTC). We have developed an algorithm that uses the kernel-based SVDD for full-pixel target detection scenarios. We have addressed optimization of the SVDD kernel-width parameter using the golden-section search algorithm for unconstrained optimization. We investigated a proper number of signatures N to generate for the SVDD target class and found that only a small number of training samples is required relative to the dimensionality (number of bands). We have extended decision-level fusion techniques using the majority vote rule to alleviate the problem of selecting a proper value of s² for either of our target variability models. We have shown that heavy spectral variability may cause SFJTC-based detection to suffer, and have addressed this by developing an algorithm that selects an optimal combination of the discrete wavelet transform (DWT) coefficients of the signatures for use as features for detection.
For most scenarios, our results show that our SVDD-based detection scheme provides low false positive rates while maintaining higher true positive rates than popular stochastic detection algorithms. Our results also show that our SFJTC-based detection scheme using the DWT coefficients can yield significant detection improvement compared to SFJTC using the original signatures and to traditional stochastic and deterministic algorithms.
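The golden-section search used above to tune the SVDD kernel width is a standard derivative-free method for minimizing a unimodal function on an interval. A generic sketch; the quadratic stand-in objective is purely illustrative, not the SVDD error surface the work actually minimizes:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                  # minimum lies in [a, d]
            b, d, fd = d, c, fc      # reuse the surviving evaluation
            c = b - invphi * (b - a)
            fc = f(c)
        else:                        # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

# stand-in unimodal objective; in the work, f would be the SVDD detection error
# as a function of the Gaussian kernel width sigma
sigma_star = golden_section_min(lambda s: (s - 2.5) ** 2, 0.1, 10.0)
```

Each iteration shrinks the bracket by the golden ratio and reuses one function evaluation, which matters when each evaluation means retraining an SVDD.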

    Mapping the distribution of invasive tree species using deep one-class classification in the tropical montane landscape of Kenya

    Some invasive tree species threaten biodiversity and cause irreversible damage to global ecosystems. The key to controlling and monitoring the propagation of invasive tree species is to detect their occurrence as early as possible. In this regard, one-class classification (OCC) shows potential in forest areas with abundant species richness, since it only requires a few positive samples of the invasive tree species to be mapped, instead of samples of all the species. However, the classical OCC methods in remote sensing are heavily dependent on manually designed features, which have a limited ability in areas with complex species distributions. Deep learning based tree species classification methods mostly focus on multi-class classification, and there have been few studies of deep OCC of tree species. In this paper, a deep positive and unlabeled learning based OCC framework, ITreeDet, is proposed for identifying the invasive tree species Eucalyptus spp. (eucalyptus) and Acacia mearnsii (black wattle) in the Taita Hills of southern Kenya. In the ITreeDet framework, an absNegative risk estimator is designed to train a robust deep OCC model by fully using the massive amount of unlabeled data. Compared with state-of-the-art OCC methods, ITreeDet represents a great improvement in detection accuracy, with F1-scores of 0.86 and 0.70 for eucalyptus and black wattle, respectively. The study area covers 100 km² of the Taita Hills, where, according to our findings, the total areas of eucalyptus and black wattle are 1.61 km² and 3.24 km², respectively, representing 6.78% and 13.65% of the area covered by trees and forest. In addition, both invasive tree species are located at higher elevations, and the extensive spread of black wattle around the study area confirms its invasive tendency. The maps generated by the proposed algorithm will help the local government develop management strategies for these two invasive species.
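The absNegative risk estimator itself is not given in the abstract. For orientation only, the standard non-negative PU (nnPU) risk that positive-unlabeled methods of this kind build on can be sketched as follows; this is a generic baseline, not the paper's estimator, and the class prior `pi` and hinge loss are assumptions:

```python
import numpy as np

def hinge(z):
    """Mean hinge loss of scores z, treating the given sign as correct."""
    return float(np.maximum(0.0, 1.0 - z).mean())

def nnpu_risk(scores_p, scores_u, pi):
    """Non-negative PU risk: pi*R_p^+ + max(0, R_u^- - pi*R_p^-)."""
    r_p_pos = hinge(scores_p)    # labeled positives scored as positive
    r_p_neg = hinge(-scores_p)   # labeled positives scored as negative
    r_u_neg = hinge(-scores_u)   # unlabeled pixels scored as negative
    # clamping at zero keeps the estimated negative risk from going negative,
    # which is what makes deep PU training stable
    return pi * r_p_pos + max(0.0, r_u_neg - pi * r_p_neg)

risk = nnpu_risk(np.array([2.0, 3.0]), np.array([-2.0, -3.0, 2.5]), pi=0.3)
```

The unlabeled term is corrected by the expected contribution of hidden positives, which is how such estimators "fully use the massive amount of unlabeled data" without negative labels.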