Locality and Structure Regularized Low Rank Representation for Hyperspectral Image Classification
Hyperspectral image (HSI) classification, which aims to assign an accurate
label for hyperspectral pixels, has drawn great interest in recent years.
Although low rank representation (LRR) has been used to classify HSI, its
ability to segment each class from the whole HSI data has not been fully
exploited. LRR has a good capacity to capture the underlying low-dimensional
subspaces embedded in original data. However, there are still two drawbacks for
LRR. First, LRR does not consider the local geometric structure within data,
which makes the local correlation among neighboring data easily ignored.
Second, the representation obtained by solving LRR is not discriminative enough
to separate different data. In this paper, a novel locality and structure
regularized low rank representation (LSLRR) model is proposed for HSI
classification. To overcome the above limitations, we present locality
constraint criterion (LCC) and structure preserving strategy (SPS) to improve
the classical LRR. Specifically, we introduce a new distance metric, which
combines both spatial and spectral features, to explore the local similarity of
pixels. Thus, the global and local structures of HSI data can be exploited
sufficiently. Besides, we propose a structure constraint to make the
representation have a near block-diagonal structure. This helps to determine
the final classification labels directly. Extensive experiments have been
conducted on three popular HSI datasets, and the results demonstrate that the
proposed LSLRR outperforms other state-of-the-art methods.
Comment: 14 pages, 7 figures, TGRS201
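The abstract builds on the classical LRR model, whose core idea can be illustrated in isolation. The sketch below is not the paper's LSLRR (which adds the locality and structure constraints); it only demonstrates the well-known closed-form solution of the noiseless LRR problem, min ||Z||_* s.t. X = XZ, namely Z* = V V^T from the skinny SVD X = U S V^T. All variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data whose columns (samples) lie in a 3-dimensional subspace,
# mimicking pixels drawn from one low-dimensional spectral subspace.
basis = rng.standard_normal((50, 3))    # 50-dim features, rank-3 subspace
coeffs = rng.standard_normal((3, 20))   # 20 samples
X = basis @ coeffs                      # 50 x 20 data matrix, rank 3

# Skinny SVD; keep only the numerically nonzero singular directions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())       # numerical rank of X
V = Vt[:r].T                            # 20 x r right singular vectors

# Closed-form lowest-rank representation for the noiseless case.
Z = V @ V.T

print(np.linalg.matrix_rank(Z))         # 3: Z has the same low rank as X
print(np.allclose(X @ Z, X))            # True: the constraint X = XZ holds
```

The representation Z captures which samples share a subspace; LSLRR, per the abstract, further regularizes Z toward a near block-diagonal structure so class labels can be read off directly.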
KCRC-LCD: Discriminative Kernel Collaborative Representation with Locality Constrained Dictionary for Visual Categorization
We consider the image classification problem via kernel collaborative
representation classification with locality constrained dictionary (KCRC-LCD).
Specifically, we propose a kernel collaborative representation classification
(KCRC) approach in which the kernel method is used to improve the discrimination
ability of collaborative representation classification (CRC). We then measure
the similarities between the query and atoms in the global dictionary in order
to construct a locality constrained dictionary (LCD) for KCRC. In addition, we
discuss several similarity measure approaches in LCD and further present a
simple yet effective unified similarity measure whose superiority is validated
in experiments. There are several appealing aspects associated with LCD. First,
LCD can be nicely incorporated under the framework of KCRC. The LCD similarity
measure can be kernelized under KCRC, which theoretically links CRC and LCD
under the kernel method. Second, KCRC-LCD becomes more scalable to both the
training set size and the feature dimension. An illustrative example shows that
KCRC can perfectly classify data with certain distributions, while conventional
CRC fails completely. Comprehensive experiments on many public datasets also show that
KCRC-LCD is a robust discriminative classifier with both excellent performance
and good scalability, comparable to or outperforming many other
state-of-the-art approaches.
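KCRC-LCD extends collaborative representation classification (CRC), which the abstract takes as its baseline. The sketch below shows plain CRC only (no kernel, no locality-constrained dictionary): a query is coded over the whole dictionary with a ridge-regularized least-squares solve, then assigned the label of the class whose atoms reconstruct it best. The synthetic two-class data and parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_class(dim, sub, n, rng):
    # Samples from one class, spanning its own low-dimensional subspace.
    B = rng.standard_normal((dim, sub))
    return B @ rng.standard_normal((sub, n))

dim, n = 30, 15
D1 = make_class(dim, 2, n, rng)             # class 0 atoms
D2 = make_class(dim, 2, n, rng)             # class 1 atoms
D = np.hstack([D1, D2])                     # global dictionary, columns = atoms
labels = np.array([0] * n + [1] * n)

def crc_classify(y, D, labels, lam=1e-3):
    # Collaborative coding: ridge regression over ALL atoms at once.
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    # Class-wise reconstruction residuals; smallest residual wins.
    residuals = []
    for c in np.unique(labels):
        mask = labels == c
        residuals.append(np.linalg.norm(y - D[:, mask] @ alpha[mask]))
    return int(np.argmin(residuals))

# A query drawn from class 0's subspace is assigned label 0.
y = D1 @ rng.standard_normal(n)
print(crc_classify(y, D, labels))           # 0
```

KCRC replaces the inner products here with kernel evaluations, and the LCD step restricts the dictionary to atoms similar to the query before coding, which is what makes the method scale with dictionary size.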
Extrinsic Methods for Coding and Dictionary Learning on Grassmann Manifolds
Sparsity-based representations have recently led to notable results in
various visual recognition tasks. In a separate line of research, Riemannian
manifolds have been shown to be useful for dealing with features and models that do
not lie in Euclidean spaces. With the aim of building a bridge between the two
realms, we address the problem of sparse coding and dictionary learning over
the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping. This in turn enables
us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we
propose closed-form solutions for learning a Grassmann dictionary, atom by
atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann
sparse coding and dictionary learning algorithms through embedding into Hilbert
spaces.
Experiments on several classification tasks (gender recognition, gesture
classification, scene analysis, face recognition, action recognition and
dynamic texture classification) show that the proposed approaches achieve
considerable improvements in discrimination accuracy, in comparison to
state-of-the-art methods such as kernelized Affine Hull Method and
graph-embedding Grassmann discriminant analysis.
Comment: Appearing in International Journal of Computer Vision
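The central device in this abstract, embedding Grassmann manifolds into the space of symmetric matrices, can be shown in a few lines. The sketch below is a minimal illustration, not the paper's coding or dictionary-learning algorithm: a subspace span(Y) with orthonormal basis Y maps to the projection matrix Y Y^T, which is invariant to the choice of basis, and the Frobenius distance between embedded points gives an extrinsic (chordal) distance. Dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def orthonormal_basis(A):
    # A point on the Grassmann manifold, represented by an orthonormal basis.
    Q, _ = np.linalg.qr(A)
    return Q

def embed(Y):
    # Embedding into symmetric matrices: span(Y) -> Y Y^T (the projector).
    return Y @ Y.T

def projection_distance(Y1, Y2):
    # Extrinsic distance induced by the embedding.
    return np.linalg.norm(embed(Y1) - embed(Y2), ord="fro")

p, q = 10, 2                                # 2-dim subspaces of R^10
Y1 = orthonormal_basis(rng.standard_normal((p, q)))
Y2 = orthonormal_basis(rng.standard_normal((p, q)))

# Basis invariance: rotating the basis within the same subspace leaves the
# embedded point (and hence all distances) unchanged.
R = orthonormal_basis(rng.standard_normal((q, q)))  # q x q orthogonal matrix
print(np.allclose(embed(Y1 @ R), embed(Y1)))        # True
print(projection_distance(Y1, Y2) > 0)              # True for distinct subspaces
```

Because the embedded points live in a flat (Euclidean) matrix space, standard sparse coding and dictionary update steps can be applied there, which is what enables the closed-form atom updates the abstract mentions.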