Discriminative Dimensionality Reduction in Kernel Space
Schulz A, Hammer B. Discriminative Dimensionality Reduction in Kernel Space. In: ESANN 2016 Proceedings. 24th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Bruges, Belgium, 27-29 April 2016. i6doc.com; 2016.
KCRC-LCD: Discriminative Kernel Collaborative Representation with Locality Constrained Dictionary for Visual Categorization
We consider the image classification problem via kernel collaborative representation classification with a locality constrained dictionary (KCRC-LCD). Specifically, we propose a kernel collaborative representation classification (KCRC) approach in which the kernel method is used to improve the discrimination ability of collaborative representation classification (CRC). We then measure the similarities between the query and the atoms in the global dictionary in order to construct a locality constrained dictionary (LCD) for KCRC. In addition, we discuss several similarity measures for the LCD and further present a simple yet effective unified similarity measure whose superiority is validated in experiments. There are several appealing aspects to the LCD. First, the LCD can be nicely incorporated into the KCRC framework: the LCD similarity measure can be kernelized under KCRC, which theoretically links CRC and the LCD under the kernel method. Second, KCRC-LCD is more scalable in both training set size and feature dimension. An example shows that KCRC can perfectly classify data with certain distributions on which conventional CRC fails completely. Comprehensive experiments on many public datasets also show that KCRC-LCD is a robust discriminative classifier with both excellent performance and good scalability, comparable to or outperforming many other state-of-the-art approaches.
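The CRC baseline that the abstract improves upon can be sketched in a few lines: a query is coded jointly over all training atoms via ridge regression, then assigned to the class whose atoms' contribution best reconstructs it. This is a minimal illustrative sketch, not the authors' implementation; the function name, the regularised-residual rule, and the toy data are assumptions for demonstration.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """Classify query y by collaborative representation over dictionary X.

    X: (d, n) matrix whose columns are training atoms.
    labels: (n,) class id per atom.
    y: (d,) query vector.
    """
    # Ridge-regression coding of the query over ALL atoms jointly
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        mask = labels == c
        # Class-wise reconstruction residual, regularised by the code energy
        r = np.linalg.norm(y - X[:, mask] @ alpha[mask]) / np.linalg.norm(alpha[mask])
        residuals.append(r)
    return classes[int(np.argmin(residuals))]

# Toy dictionary: two atoms near [1,0] (class 0), two near [0,1] (class 1)
X = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = np.array([0, 0, 1, 1])
print(crc_classify(X, labels, np.array([0.95, 0.05])))
```

A kernelized variant (KCRC) would replace the inner products `X.T @ X` and `X.T @ y` with kernel evaluations, which is what lets the method separate distributions on which plain CRC fails.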
Improving "bag-of-keypoints" image categorisation: Generative Models and PDF-Kernels
In this paper we propose two distinct enhancements to the basic "bag-of-keypoints" image categorisation scheme proposed in [4]. In this approach images are represented as a variable-sized set of local image features (keypoints). Thus, we require machine learning tools which can operate on sets of vectors. In [4] this is achieved by representing the set as a histogram over bins found by k-means. We show how this approach can be improved and generalised using Gaussian Mixture Models (GMMs). Alternatively, the set of keypoints can be represented directly as a probability density function, over which a kernel can be defined. This approach is shown to give state-of-the-art categorisation performance …
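The baseline representation from [4] that this abstract generalises can be sketched directly: each local descriptor is assigned to its nearest k-means centre ("visual word"), and the image becomes a normalised histogram of word counts. The function name and toy data below are illustrative assumptions; a real pipeline would first fit the codebook on descriptors pooled from the training set.

```python
import numpy as np

def bag_of_keypoints_histogram(descriptors, codebook):
    """Represent one image's descriptor set as a visual-word histogram.

    descriptors: (m, d) local features extracted from the image.
    codebook: (k, d) cluster centres found by k-means on training descriptors.
    """
    # Squared Euclidean distance from every descriptor to every centre
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                  # normalise to a fixed-length vector

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
descriptors = np.array([[0.1, 0.0], [9.0, 10.0], [10.0, 9.0]])
print(bag_of_keypoints_histogram(descriptors, codebook))
```

The GMM enhancement replaces the hard nearest-centre assignment with soft posterior responsibilities; the PDF-kernel alternative skips the histogram entirely and defines a kernel between the descriptor densities themselves.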