    Sparse representation-based augmented multinomial logistic extreme learning machine with weighted composite features for spectral–spatial classification of hyperspectral images.

    Although the extreme learning machine (ELM) has been successfully applied to a number of pattern recognition problems, the original ELM can hardly yield high accuracy for the classification of hyperspectral images (HSIs) due to two main drawbacks. The first is the randomly generated input weights and bias, which cannot guarantee an optimal output of the ELM. The second is the lack of spatial information in the classifier, as the conventional ELM only utilizes spectral information for HSI classification. To tackle these two problems, a new framework for ELM-based spectral-spatial classification of HSIs is proposed, where probabilistic modeling with sparse representation and weighted composite features (WCFs) is employed to derive the optimized output weights and extract spatial features. First, the ELM is represented as a concave logarithmic-likelihood function under statistical modeling using the maximum a posteriori (MAP) estimator. Second, sparse representation is applied to the Laplacian prior to efficiently determine a logarithmic posterior with a unique maximum, thereby solving the ill-posed problem of the ELM. Variable splitting and the augmented Lagrangian are subsequently used to further reduce the computational complexity of the proposed algorithm. Third, spatial information is extracted using the WCFs to construct the spectral-spatial classification framework. In addition, the lower bound of the proposed method is derived by a rigorous mathematical proof. Experimental results on three publicly available HSI data sets demonstrate that the proposed methodology outperforms both the ELM and a number of state-of-the-art approaches.
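    For context, the standard ELM baseline that the abstract improves upon reduces to a random hidden layer followed by a least-squares solve for the output weights. The NumPy sketch below illustrates only that baseline, not the proposed sparse-representation/MAP extension; the function and variable names are illustrative assumptions.

        import numpy as np

        def elm_train(X, Y, n_hidden=500, seed=0):
            """Minimal ELM baseline: random hidden layer + least-squares output weights.
            X: (n_samples, n_features) spectral vectors; Y: (n_samples, n_classes) one-hot labels."""
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (fixed, not trained)
            b = rng.standard_normal(n_hidden)                 # random bias
            H = np.tanh(X @ W + b)                            # hidden-layer activations
            beta, *_ = np.linalg.lstsq(H, Y, rcond=None)      # output weights via the pseudo-inverse solution
            return W, b, beta

        def elm_predict(X, W, b, beta):
            H = np.tanh(X @ W + b)
            return np.argmax(H @ beta, axis=1)

    Because the input weights W and bias b are drawn at random and never optimized, two runs with different seeds can give noticeably different accuracies, which is exactly the first drawback the abstract addresses.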

    Extreme sparse multinomial logistic regression: a fast and robust framework for hyperspectral image classification

    Although sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high-dimensional features and from manually set initial regressor values. This has significantly constrained its application to hyperspectral image (HSI) classification. To tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSIs. First, the HSI dataset is projected into a new feature space with randomly generated weights and bias. Second, an optimization model is established via the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for the SMLR by minimizing the training error and the regressor value. Furthermore, extended multi-attribute profiles (EMAPs) are utilized to extract both spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by the ESMLR and EMAPs. Finally, logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.
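    The overall pipeline described here, an ELM-style random projection of the spectra followed by a multinomial logistic regressor, can be sketched roughly as below. This is a simplified reading of the abstract: scikit-learn's LogisticRegression is used as a stand-in for the SMLR/LORSAL solver, the EMAP and MFL spatial features are omitted, and all names are illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def random_projection(X, n_hidden=500, seed=0):
            """ELM-style random feature mapping applied before the multinomial regressor."""
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            return np.tanh(X @ W + b)

        # X_train: (n_samples, n_bands) spectra; y_train: integer class labels
        # H_train = random_projection(X_train)
        # clf = LogisticRegression(max_iter=1000).fit(H_train, y_train)  # stand-in for SMLR/LORSAL
        # y_pred = clf.predict(random_projection(X_test))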

    Spectral feature fusion networks with dual attention for hyperspectral image classification

    Recent progress in spectral classification is largely attributed to the use of convolutional neural networks (CNNs). While a variety of successful architectures have been proposed, they all extract spectral features from various portions of adjacent spectral bands. In this paper, we take a different approach and develop a deep spectral feature fusion method that extracts both local and inter-local spectral features, thereby also capturing the correlations among non-adjacent bands. To our knowledge, this is the first reported deep spectral feature fusion method. Our model is a two-stream architecture in which an intergroup and a groupwise spectral classifier operate in parallel. The inter-local spectral correlation feature extraction is achieved elegantly by reshaping the input spectral vectors to form so-called non-adjacent spectral matrices. We introduce the concept of groupwise band convolution to enable efficient extraction of discriminative local features with multiple kernels adapting to the local spectral content. Another important contribution of this work is a novel dual-channel attention mechanism to identify the most informative spectral features. The model is trained in an end-to-end fashion with a joint loss. Experimental results on real data sets demonstrate excellent performance compared to the current state of the art.
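    Two of the building blocks mentioned here, groupwise band convolution and the reshaping of a spectrum into a non-adjacent spectral matrix, can be illustrated with a small PyTorch sketch. This is a speculative reconstruction from the abstract alone: the layer sizes, group count, and tensor layout are assumptions, and the dual-channel attention, the intergroup classifier head, and the joint loss are omitted.

        import torch
        import torch.nn as nn

        class GroupwiseSpectralBranch(nn.Module):
            """Groupwise band convolution: split the spectrum into groups of adjacent
            bands and convolve each group with its own kernels (groups=n_groups)."""
            def __init__(self, n_bands=200, n_groups=10, feats_per_group=8):
                super().__init__()
                self.n_groups = n_groups
                self.conv = nn.Conv1d(n_groups, n_groups * feats_per_group,
                                      kernel_size=3, padding=1, groups=n_groups)

            def forward(self, x):                      # x: (batch, n_bands)
                b = x.shape[0]
                x = x.view(b, self.n_groups, -1)       # (batch, groups, bands_per_group)
                return self.conv(x).flatten(1)         # local (group-wise) spectral features

        def non_adjacent_matrix(x, n_rows=10):
            """Reshape a spectrum into a matrix whose columns stack bands that are far
            apart in wavelength, so a 2-D convolution over it mixes non-adjacent bands."""
            b, n_bands = x.shape
            return x.view(b, 1, n_rows, n_bands // n_rows)   # (batch, 1, rows, cols) for Conv2d

        # x = torch.randn(16, 200)                # a batch of 200-band spectra
        # local = GroupwiseSpectralBranch()(x)    # groupwise (local) branch features
        # inter = non_adjacent_matrix(x)          # input to the intergroup (2-D conv) branch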

    Locality Regularized Robust-PCRC: A Novel Simultaneous Feature Extraction and Classification Framework for Hyperspectral Images

    Despite the successful application of probabilistic collaborative representation classification (PCRC) in pattern classification, it still suffers from two challenges when applied to hyperspectral image (HSI) classification: 1) ineffective feature extraction from HSIs under noisy conditions; and 2) lack of prior information for HSI classification. To tackle the first problem, we introduce sparse representation into the PCRC, i.e., we replace the ℓ2-norm with the ℓ1-norm for effective feature extraction under noisy conditions. In order to utilize the prior information in HSIs, we first introduce the Euclidean distance (ED) between the training samples and the testing samples into the PCRC to improve its performance. Then, we bring the coordinate information (CI) of the HSIs into the proposed model, which finally leads to the proposed locality regularized robust PCRC (LRR-PCRC). Experimental results show that the proposed LRR-PCRC outperforms the PCRC and other state-of-the-art pattern recognition and machine learning algorithms.
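    The collaborative-representation step underlying (P)CRC has a closed-form, ridge-style solution, and a locality term can be added by weighting each training atom by its Euclidean distance to the test pixel. The NumPy sketch below shows that generic idea only; it is not the authors' exact LRR-PCRC formulation (the ℓ1 robust term and the coordinate-information regularizer are omitted), and the distance-based weighting scheme is an assumption.

        import numpy as np

        def crc_locality_classify(x, X_train, y_train, lam=0.01, gamma=1.0):
            """Collaborative representation with a locality (Euclidean-distance) weight.
            x: (n_bands,) test spectrum; X_train: (n_samples, n_bands); y_train: (n_samples,) labels.
            Solves  min_a ||x - D a||^2 + lam * ||G a||^2,
            where D = X_train.T and G penalizes atoms far from the test pixel."""
            D = X_train.T                                         # dictionary of training spectra
            dist = np.linalg.norm(X_train - x, axis=1)            # locality prior: distance to test pixel
            G = np.diag(1.0 + gamma * dist)
            A = D.T @ D + lam * (G.T @ G)
            a = np.linalg.solve(A, D.T @ x)                       # closed-form representation coefficients
            # assign the class whose training atoms give the smallest reconstruction residual
            classes = np.unique(y_train)
            residuals = [np.linalg.norm(x - D[:, y_train == c] @ a[y_train == c]) for c in classes]
            return classes[int(np.argmin(residuals))]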