Confident Kernel Sparse Coding and Dictionary Learning
In recent years, kernel-based sparse coding (K-SRC) has received particular
attention due to its efficient representation of nonlinear data structures in
the feature space. Nevertheless, the existing K-SRC methods suffer from the
lack of consistency between their training and test optimization frameworks. In
this work, we propose a novel confident K-SRC and dictionary learning algorithm
(CKSC) which focuses on the discriminative reconstruction of the data based on
its representation in the kernel space. CKSC focuses on reconstructing each
data sample via weighted contributions which are confident in its corresponding
class of data. We employ novel discriminative terms to apply this scheme to
both training and test frameworks in our algorithm. This specific design
increases the consistency of these optimization frameworks and improves the
discriminative performance in the recall phase. In addition, CKSC directly
employs the supervised information in its dictionary learning framework to
enhance the discriminative structure of the dictionary. For empirical
evaluations, we implement our CKSC algorithm on multivariate time-series
benchmarks such as DynTex++ and UTKinect. Our claims regarding the superior
performance of the proposed algorithm are justified by comparing its
classification results to those of state-of-the-art K-SRC algorithms.
Comment: 10 pages, ICDM 2018 conference
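The abstract above concerns kernel sparse coding (K-SRC) in general. As a rough illustration only (this is a plain, unsupervised K-SRC step, not the authors' CKSC algorithm with its discriminative terms), the sparse code of a sample in the kernel-induced feature space can be found with ISTA applied to the kernelized objective. The RBF kernel, step size, and regularizer below are assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise RBF kernel between the columns of A (d x m) and B (d x n).
    d2 = (np.sum(A**2, 0)[:, None] + np.sum(B**2, 0)[None, :]
          - 2.0 * A.T @ B)
    return np.exp(-gamma * d2)

def kernel_sparse_code(D, x, lam=0.1, n_iter=200, gamma=0.5):
    """ISTA in the kernel-induced feature space for the objective
    k(x,x) - 2 a^T k_Dx + a^T K_DD a + lam * ||a||_1,
    where D holds dictionary atoms as columns and x is one sample."""
    K_DD = rbf_kernel(D, D, gamma)                    # Gram matrix of atoms
    k_Dx = rbf_kernel(D, x[:, None], gamma).ravel()   # kernels atom vs. sample
    L = 2.0 * np.linalg.norm(K_DD, 2) + 1e-8          # Lipschitz const. of grad
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * (K_DD @ a - k_Dx)                # gradient of smooth part
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

Because the data appear only through kernel evaluations, the same routine works for any positive-definite kernel, which is the property K-SRC methods exploit for nonlinear data.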
Extrinsic Methods for Coding and Dictionary Learning on Grassmann Manifolds
Sparsity-based representations have recently led to notable results in
various visual recognition tasks. In a separate line of research, Riemannian
manifolds have been shown useful for dealing with features and models that do
not lie in Euclidean spaces. With the aim of building a bridge between the two
realms, we address the problem of sparse coding and dictionary learning over
the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping. This in turn enables
us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we
propose closed-form solutions for learning a Grassmann dictionary, atom by
atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann
sparse coding and dictionary learning algorithms through embedding into Hilbert
spaces.
Experiments on several classification tasks (gender recognition, gesture
classification, scene analysis, face recognition, action recognition and
dynamic texture classification) show that the proposed approaches achieve
considerable improvements in discrimination accuracy, in comparison to
state-of-the-art methods such as kernelized Affine Hull Method and
graph-embedding Grassmann discriminant analysis.
Comment: Appearing in International Journal of Computer Vision
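The embedding step this abstract describes, mapping Grassmann points into the space of symmetric matrices, is commonly realized as the projection embedding P = Y Y^T of a subspace with orthonormal basis Y. A minimal sketch of that idea (the helper names are illustrative, and the paper's sparse-coding machinery on top of the embedding is omitted):

```python
import numpy as np

def grassmann_embed(Y):
    # Projection embedding of span(Y) into symmetric matrices: P = Y Y^T.
    # Y must have orthonormal columns, i.e. it is a point on a Grassmann
    # manifold; P depends only on the subspace, not on the chosen basis.
    return Y @ Y.T

def projection_distance(Y1, Y2):
    # Frobenius distance between the two embeddings: the projection metric
    # on the Grassmann manifold (up to a constant scale factor).
    return np.linalg.norm(grassmann_embed(Y1) - grassmann_embed(Y2), 'fro')
```

Because P is invariant to rotating the basis of the subspace, Euclidean operations on the embedded matrices, including sparse coding, respect the Grassmann geometry.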
Unsupervised spectral sub-feature learning for hyperspectral image classification
Spectral pixel classification is one of the principal techniques used in hyperspectral image (HSI) analysis. In this article, we propose an unsupervised feature learning method for classification of hyperspectral images. The proposed method learns a dictionary of sub-feature basis representations from the spectral domain, which allows effective use of the correlated spectral data. The learned dictionary is then used to encode convolutional samples from the hyperspectral input pixels into an expanded but sparse feature space. Expanded hyperspectral feature representations enable linear separation between object classes present in an image. To evaluate the proposed method, we performed experiments on several commonly used HSI data sets acquired at different locations and by different sensors. Our experimental results show that the proposed method outperforms other pixel-wise classification methods that make use of unsupervised feature extraction approaches. Additionally, even though our approach does not use any prior knowledge or labelled training data to learn features, it yields advantageous or comparable results in terms of classification accuracy with respect to recent semi-supervised methods.
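The unsupervised dictionary-then-sparse-encoding pipeline sketched in this abstract can be illustrated with a generic stand-in: spherical k-means to learn spectral atoms, followed by a rectified-correlation sparse encoder. This is a common textbook construction, not the article's exact method, and all function names and parameters are assumptions:

```python
import numpy as np

def learn_spectral_dictionary(pixels, n_atoms=32, n_iter=20, seed=0):
    """Toy unsupervised dictionary via spherical k-means over spectra.
    pixels: (n_pixels, n_bands) array of spectral vectors."""
    rng = np.random.default_rng(seed)
    X = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-8)
    D = X[rng.choice(len(X), n_atoms, replace=False)]  # init from data
    for _ in range(n_iter):
        assign = np.argmax(X @ D.T, axis=1)            # nearest atom (cosine)
        for k in range(n_atoms):
            members = X[assign == k]
            if len(members):                           # re-fit atom to cluster
                v = members.sum(0)
                D[k] = v / (np.linalg.norm(v) + 1e-8)
    return D

def encode(pixels, D, alpha=0.1):
    # Sparse non-negative code: correlations with the atoms, shifted by a
    # threshold alpha and rectified, giving the expanded sparse feature space.
    X = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-8)
    return np.maximum(X @ D.T - alpha, 0.0)
```

The expanded codes (one coefficient per atom) can then be fed to any linear classifier per pixel, which is where the claimed linear separability would be exercised.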