
    Joint sparse model-based discriminative K-SVD for hyperspectral image classification

    Sparse representation classification (SRC) is being widely investigated for hyperspectral images (HSI). For SRC methods to achieve high classification performance, not only is the development of sparse representation models essential, but the design and learning of high-quality dictionaries also plays an important role. That is, a redundant dictionary with well-designed atoms is required to ensure low reconstruction error, high discriminative power, and stable sparsity. In this paper, we propose a new method to learn such dictionaries for HSI classification. We borrow the concept of the joint sparse model (JSM) from SRC for dictionary learning. JSM assumes local smoothness and joint sparsity, and was initially proposed for HSI classification. We leverage JSM to develop an extension of discriminative K-SVD that learns a promising discriminative dictionary for HSI. Through a semi-supervised strategy, the new dictionary learning method, termed JSM-DKSVD, utilises all spectra over the local neighbourhoods of labelled training pixels for discriminative dictionary learning. It produces a redundant dictionary with rich spectral and spatial information as well as high discriminative power. The learned dictionary can then be used in conjunction with established SRC methods and can significantly improve their performance for HSI classification.
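    The SRC framework this abstract builds on codes a test pixel sparsely over a dictionary and assigns it to the class whose atoms best reconstruct it. A minimal sketch of that baseline (not the paper's JSM-DKSVD dictionary learning itself), using orthogonal matching pursuit for the sparse code; function names here are illustrative, not from the paper:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick up to k atoms of D to represent y."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Assign y to the class whose atoms give the smallest reconstruction error."""
    x = omp(D, y, k)
    errors = {}
    for c in np.unique(labels):
        mask = labels == c
        errors[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(errors, key=errors.get)
```

    A learned dictionary such as the one JSM-DKSVD produces would simply replace `D` here; the classification rule stays the same.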

    Locality and Structure Regularized Low Rank Representation for Hyperspectral Image Classification

    Hyperspectral image (HSI) classification, which aims to assign an accurate label to each hyperspectral pixel, has drawn great interest in recent years. Although low rank representation (LRR) has been used to classify HSI, its ability to segment each class from the whole HSI data has not yet been fully exploited. LRR has a good capacity to capture the underlying low-dimensional subspaces embedded in the original data. However, LRR still has two drawbacks. First, LRR does not consider the local geometric structure within the data, so the local correlation among neighboring data is easily ignored. Second, the representation obtained by solving LRR is not discriminative enough to separate different data. In this paper, a novel locality and structure regularized low rank representation (LSLRR) model is proposed for HSI classification. To overcome the above limitations, we present a locality constraint criterion (LCC) and a structure preserving strategy (SPS) to improve the classical LRR. Specifically, we introduce a new distance metric, which combines both spatial and spectral features, to explore the local similarity of pixels. Thus, the global and local structures of HSI data can be exploited sufficiently. Besides, we propose a structure constraint to make the representation have a near block-diagonal structure, which helps to determine the final classification labels directly. Extensive experiments have been conducted on three popular HSI datasets. The experimental results demonstrate that the proposed LSLRR outperforms other state-of-the-art methods.
    Comment: 14 pages, 7 figures, TGRS201
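    LRR models of this kind are typically solved by ADMM-style iterations whose core step is singular value thresholding (SVT), the proximal operator of the nuclear norm that enforces low rank. A minimal sketch of that operator (a generic building block, not the full LSLRR solver):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * ||.||_* (nuclear norm).
    Shrinks every singular value of X by tau and zeroes those below tau,
    which is the low-rank-enforcing step inside LRR-type ADMM solvers."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

    Applied repeatedly inside an alternating scheme, this step drives the representation matrix toward low rank while other terms (here, LSLRR's locality and structure constraints) shape it further.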

    KCRC-LCD: Discriminative Kernel Collaborative Representation with Locality Constrained Dictionary for Visual Categorization

    We consider the image classification problem via kernel collaborative representation classification with a locality constrained dictionary (KCRC-LCD). Specifically, we propose a kernel collaborative representation classification (KCRC) approach in which the kernel method is used to improve the discrimination ability of collaborative representation classification (CRC). We then measure the similarities between the query and the atoms in the global dictionary in order to construct a locality constrained dictionary (LCD) for KCRC. In addition, we discuss several similarity measures for LCD and further present a simple yet effective unified similarity measure whose superiority is validated in experiments. There are several appealing aspects of LCD. First, LCD can be nicely incorporated into the KCRC framework: the LCD similarity measure can be kernelized under KCRC, which theoretically links CRC and LCD through the kernel method. Second, KCRC-LCD is more scalable in both the training set size and the feature dimension. An example shows that KCRC can perfectly classify data with certain distributions where conventional CRC fails completely. Comprehensive experiments on many public datasets also show that KCRC-LCD is a robust discriminative classifier with both excellent performance and good scalability, comparable to or outperforming many other state-of-the-art approaches.
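    The CRC baseline that KCRC kernelizes codes the query over all atoms jointly with an l2 (ridge) penalty, which has a closed-form solution, and then classifies by per-class residual. A minimal sketch of that plain CRC baseline (not the kernelized or locality-constrained variant; names are illustrative):

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Collaborative representation classification (CRC):
    code y over ALL atoms with an l2 penalty via the closed-form ridge
    solution x = (D^T D + lam*I)^{-1} D^T y, then assign the class whose
    atoms yield the smallest reconstruction error."""
    n = D.shape[1]
    x = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        err = np.linalg.norm(y - D[:, mask] @ x[mask])
        if err < best_err:
            best, best_err = c, err
    return best
```

    KCRC replaces the inner products `D.T @ D` and `D.T @ y` with kernel evaluations, and LCD restricts `D` to the atoms most similar to the query before coding.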