26 research outputs found

    Accelerating Relevance-Vector-Machine-Based Classification of Hyperspectral Image with Parallel Computing

    Get PDF
    Benefiting from the kernel trick and its sparsity, the relevance vector machine (RVM) acquires a sparse solution with generalization ability comparable to that of the support vector machine. The sparsity requires much less time in prediction, making the RVM a promising candidate for classifying large-scale hyperspectral images. However, the RVM is not widely used because of its slow training procedure. To address this problem, this paper accelerates RVM-based classification of hyperspectral images with parallel computing. Parallelism is exploited at three levels: the multiclass strategy, the ensemble of multiple weak classifiers, and the matrix operations. The parallel RVMs are implemented in C using the parallel routines of linear algebra packages and the message passing interface (MPI) library. The proposed methods are evaluated on the AVIRIS Indian Pines data set on a Beowulf cluster and on multicore platforms. The results show that the parallel RVMs clearly accelerate the training procedure.
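    The multiclass-level parallelism the abstract mentions can be sketched very simply: under a one-vs-rest strategy, each binary classifier is trained independently, so the per-class jobs can be dispatched to a worker pool. The sketch below is a hypothetical illustration only: a trivial class-mean "prototype" stands in for a full RVM fit, and a Python thread pool stands in for the paper's MPI processes.

```python
# Illustrative sketch of one-vs-rest parallel training.
# train_binary() is a toy stand-in for a binary RVM fit, NOT the
# paper's algorithm; the thread pool stands in for MPI workers.
from concurrent.futures import ThreadPoolExecutor

def train_binary(args):
    """'Train' one one-vs-rest model: here, just the mean vector
    (prototype) of the positive class."""
    target, X, y = args
    pos = [x for x, lbl in zip(X, y) if lbl == target]
    dim = len(pos[0])
    return target, tuple(sum(p[d] for p in pos) / len(pos) for d in range(dim))

def train_one_vs_rest(X, y, max_workers=2):
    """Dispatch the independent per-class jobs to a worker pool."""
    jobs = [(c, X, y) for c in sorted(set(y))]
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return dict(ex.map(train_binary, jobs))

# Toy 2-class, 2-D data set.
X = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1)]
y = [0, 0, 1, 1]
models = train_one_vs_rest(X, y)
```

Because the per-class fits never communicate, the same structure maps directly onto MPI ranks or separate cores, which is what makes the multiclass level an easy first target for parallelization.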

    Adaptive Detection of Structured Signals in Low-Rank Interference

    Full text link
    In this paper, we consider the problem of detecting the presence (or absence) of an unknown but structured signal from the space-time outputs of an array under strong, non-white interference. Our motivation is the detection of a communication signal in jamming, where the training portion is often known but the data portion is not. We assume that the measurements are corrupted by additive white Gaussian noise of unknown variance and by a few strong interferers whose number, powers, and array responses are unknown. We also assume the desired signal's array response is unknown. To address the detection problem, we propose several GLRT-based detection schemes that employ a probabilistic signal model and use the EM algorithm for likelihood maximization. Numerical experiments are presented to assess the performance of the proposed schemes.
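    The core GLRT idea is to replace unknown parameters by their maximum-likelihood estimates under each hypothesis and compare the resulting likelihoods. As a much-simplified illustration (known signal shape, white noise only, no interference and no EM step, so this is not the paper's detector), here is the classic GLRT for an unknown-amplitude signal in noise of unknown variance, which reduces to a log residual-energy ratio:

```python
# Simplified GLRT sketch: x = a*s + w, with amplitude a and noise
# variance both unknown. 2*ln(likelihood ratio) reduces to
# n * ln(RSS under H0 / RSS under H1).
import math

def glrt_statistic(x, s):
    n = len(x)
    # ML amplitude estimate under H1 (least squares).
    a_hat = sum(xi * si for xi, si in zip(x, s)) / sum(si * si for si in s)
    rss0 = sum(xi * xi for xi in x)                             # H0: noise only
    rss1 = sum((xi - a_hat * si) ** 2 for xi, si in zip(x, s))  # H1 residual
    return n * math.log(rss0 / rss1)

s = [1.0, 1.0, 1.0, 1.0]                       # assumed signal shape
t_signal = glrt_statistic([1.1, 0.9, 1.0, 1.0], s)   # signal present
t_noise = glrt_statistic([0.1, -0.1, 0.05, -0.05], s)  # noise only
```

The statistic is large when modeling the data as signal-plus-noise explains far more energy than noise alone; the paper's schemes extend this principle to structured signals and low-rank interference via EM-based likelihood maximization.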

    Regularized estimation of linear functionals of precision matrices for high-dimensional time series

    Full text link
    This paper studies a Dantzig-selector-type regularized estimator for linear functionals of high-dimensional linear processes. Explicit rates of convergence of the proposed estimator are obtained, covering the broad regime from i.i.d. samples to long-range dependent time series and from sub-Gaussian innovations to those with only mild polynomial moments. It is shown that the convergence rates depend on the degree of temporal dependence and the moment conditions of the underlying linear processes. The Dantzig-selector estimator is applied to sparse Markowitz portfolio allocation and to optimal linear prediction for time series, for which ratio consistency relative to an oracle estimator is established. The effect of dependence and innovation moment conditions is further illustrated in a simulation study. Finally, the regularized estimator is applied to classifying cognitive states on a real fMRI dataset and to portfolio optimization on a financial dataset.
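    A Dantzig-selector-type estimator minimizes the l1 norm of the coefficients subject to a sup-norm bound on the correlation between the residual and the regressors. The full estimator is a linear program; the sketch below only checks that defining feasibility constraint, with names and data chosen for illustration:

```python
# Feasibility check for the Dantzig-selector constraint
#   || X^T (y - X b) ||_inf <= lam
# The l1-minimizing solve itself (a linear program) is omitted.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dantzig_feasible(X, y, b, lam):
    resid = [yi - ri for yi, ri in zip(y, matvec(X, b))]
    # Correlate each column of X with the residual.
    corr = [sum(c * r for c, r in zip(col, resid)) for col in zip(*X)]
    return max(abs(c) for c in corr) <= lam

# Toy design: y is generated exactly by b = [1, 2].
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 2.0, 3.0]
```

Larger `lam` enlarges the feasible set (admitting sparser, more biased solutions); the paper's rates quantify how `lam` must scale with the dependence and moment conditions of the process.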

    CLASS-BASED AFFINITY PROPAGATION FOR HYPERSPECTRAL IMAGE DIMENSIONALITY REDUCTION AND IMPROVEMENT OF MAXIMUM LIKELIHOOD CLASSIFICATION ACCURACY

    Get PDF
    This paper investigates an alternative classification method that integrates the class-based affinity propagation (CAP) clustering algorithm and the maximum likelihood classifier (MLC), with the purpose of overcoming the limitations of MLC in the classification of high-dimensional data and thus improving its accuracy. The new classifier, named CAP-MLC, comprises two stages: spectral feature selection and image classification. The CAP clustering algorithm performs the dimensionality reduction and feature selection, while the MLC performs the classification. The performance of MLC, in terms of classification accuracy and processing time, is determined as a function of the selection rate achieved in the CAP clustering stage. CAP-MLC has been evaluated and validated using two hyperspectral scenes from the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) and the Hyperspectral Digital Imagery Collection Experiment (HYDICE). Classification results show that CAP-MLC achieved a substantial improvement in accuracy, reaching 94.15% and 96.47% for AVIRIS and HYDICE respectively, compared with 85.42% and 81.50% for MLC alone, an improvement of 8.73 and 14.97 percentage points on these images. The results also show that CAP-MLC performed well even for classes with limited training samples, surpassing the limitations of MLC.
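    The MLC half of the pipeline models each class as a Gaussian and assigns a pixel to the class with the highest likelihood. As a hedged, minimal sketch (diagonal covariance instead of the full covariance MLC uses, toy data, illustrative names, and no CAP clustering step):

```python
# Minimal Gaussian maximum-likelihood classifier sketch.
# Diagonal covariance is a simplification; the real MLC uses full
# per-class covariance matrices, which is exactly why it struggles
# when band count is high relative to training samples.
import math

def fit_mlc(X, y):
    """Per-class mean and diagonal variance."""
    model = {}
    for c in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == c]
        n, d = len(pts), len(pts[0])
        mu = [sum(p[j] for p in pts) / n for j in range(d)]
        var = [max(sum((p[j] - mu[j]) ** 2 for p in pts) / n, 1e-6)
               for j in range(d)]
        model[c] = (mu, var)
    return model

def predict_mlc(model, x):
    """Assign x to the class with the largest Gaussian log-likelihood."""
    def loglik(mu, var):
        return -0.5 * sum(math.log(v) + (xi - m) ** 2 / v
                          for xi, m, v in zip(x, mu, var))
    return max(model, key=lambda c: loglik(*model[c]))

X = [(0.0, 0.1), (0.1, 0.0), (1.0, 1.1), (1.1, 0.9)]
y = [0, 0, 1, 1]
mlc = fit_mlc(X, y)
```

In CAP-MLC, the clustering stage would first shrink the band count before such a classifier is fit, which is what keeps the covariance estimates well-conditioned.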

    “Small Sample Size”: a methodological problem in Bayes plug-in classifier for image recognition

    Get PDF
    New technologies in the form of improved instrumentation have made it possible to take detailed measurements of recognition patterns. This increase in the number of features or parameters for each pattern of interest does not necessarily yield better classification performance. In fact, in problems where the number of training samples is less than the number of parameters, i.e. “small sample size” problems, not all parameters can be estimated, and traditional classifiers often used to analyse lower-dimensional data deteriorate. The Bayes plug-in classifier has been successfully applied to discriminate high-dimensional data. This classifier is based on similarity measures that involve the inverse of the sample group covariance matrices. However, these matrices are singular in “small sample size” problems. Thus, several other methods of covariance estimation have been proposed in which the sample group covariance estimate is replaced by covariance matrices of various forms. In this report, some of these approaches are reviewed and a new covariance estimator is proposed. The new estimator does not require an optimisation procedure, but rather an eigenvector-eigenvalue ordering process to select information from the projected sample group covariance matrices whenever possible, and the pooled covariance otherwise. The effectiveness of the method is shown by experimental results.
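    The singularity problem and the standard remedy can be shown concretely. With only two samples in two dimensions, the group covariance is rank-deficient; blending it with a pooled (or otherwise full-rank) estimate restores invertibility. The convex-combination form below is a common shrinkage scheme chosen for illustration, not the report's exact eigenvector-ordering estimator:

```python
# Sketch: a singular sample group covariance repaired by mixing it
# with a pooled covariance estimate. alpha and the pooled matrix
# are illustrative, not the report's proposed estimator.

def covariance(pts):
    """Sample covariance matrix (divides by n, for simplicity)."""
    n, d = len(pts), len(pts[0])
    mu = [sum(p[j] for p in pts) / n for j in range(d)]
    return [[sum((p[i] - mu[i]) * (p[j] - mu[j]) for p in pts) / n
             for j in range(d)] for i in range(d)]

def mixed_estimate(S_group, S_pooled, alpha):
    """Convex combination alpha*S_group + (1-alpha)*S_pooled."""
    d = len(S_group)
    return [[alpha * S_group[i][j] + (1 - alpha) * S_pooled[i][j]
             for j in range(d)] for i in range(d)]

def det2(S):
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]

# Two samples in 2-D: the group covariance is rank-1, hence singular.
group = [(0.0, 0.0), (1.0, 1.0)]
pooled = [[1.0, 0.0], [0.0, 1.0]]   # illustrative pooled estimate
S_g = covariance(group)
S_mix = mixed_estimate(S_g, pooled, 0.5)
```

The group covariance has zero determinant and cannot be inverted for the Bayes plug-in similarity measure, while the mixed estimate is positive definite and can.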

    Hyperspectral Image Classification With Independent Component Discriminant Analysis

    Full text link

    IMPROVED STATISTICS ESTIMATION AND FEATURE EXTRACTION FOR HYPERSPECTRAL DATA CLASSIFICATION

    Get PDF
    For hyperspectral data classification, a key problem is avoiding singularity of covariance estimates, or excessive near-singularity estimation error, due to limited training data. This study is intended to solve this problem via regularized covariance estimators and feature extraction algorithms. A second purpose is to build a classification procedure with the advantages of the algorithms proposed in this study that is robust in the sense of not requiring extensive analyst operator skill. A pair of covariance estimators called Mixed-LOOCs is proposed for avoiding excessive covariance estimator error. Mixed-LOOC2 has advantages over LOOC and BLOOC and needs less computation than those two. Based on Mixed-LOOC2, new DAFE and mixture classifier algorithms are proposed. Current feature extraction algorithms, while effective in some circumstances, have significant limitations. Discriminant analysis feature extraction (DAFE) is fast but does not perform well with classes whose mean values are similar, and it produces only N-1 reliable features, where N is the number of classes. Decision boundary feature extraction (DBFE) does not have these limitations but does not perform well when training sets are small. A new nonparametric weighted feature extraction method (NWFE) is developed to solve the problems of DAFE and DBFE. NWFE takes advantage of the desirable characteristics of DAFE and DBFE while avoiding their shortcomings. Finally, experimental results show that applying NWFE features to a mixture classifier based on the Mixed-LOOC2 covariance estimator gives the best performance and is a robust procedure for classifying hyperspectral data.
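    The N-1 feature limit of DAFE follows from linear algebra: the between-class scatter matrix is a sum of N rank-one terms around the global mean, so its rank is at most N-1, and discriminant directions beyond that carry no between-class information. A small worked check (toy data, illustrative names) for N=2 classes in 2-D, where the scatter matrix must be singular:

```python
# Worked check that the between-class scatter matrix
#   S_b = sum_i n_i (mu_i - mu)(mu_i - mu)^T
# has rank <= N-1: with N=2 classes in 2-D it is rank 1 (singular).

def mean(pts):
    n, d = len(pts), len(pts[0])
    return [sum(p[j] for p in pts) / n for j in range(d)]

def between_class_scatter(groups):
    all_pts = [p for g in groups for p in g]
    mu = mean(all_pts)
    d = len(mu)
    Sb = [[0.0] * d for _ in range(d)]
    for g in groups:
        diff = [m - t for m, t in zip(mean(g), mu)]
        for i in range(d):
            for j in range(d):
                Sb[i][j] += len(g) * diff[i] * diff[j]
    return Sb

groups = [[(0.0, 0.0), (0.2, 0.2)], [(1.0, 1.0), (1.2, 1.2)]]
Sb = between_class_scatter(groups)
det_Sb = Sb[0][0] * Sb[1][1] - Sb[0][1] * Sb[1][0]
```

NWFE sidesteps this limit by weighting every training sample (not just the class means) when building its scatter matrices, which is why it can supply more reliable features than DAFE when classes overlap.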