Flexible unsupervised feature extraction for image classification
Dimensionality reduction is one of the fundamental and important topics in the fields of pattern recognition and machine learning. However, most existing dimensionality reduction methods seek a projection matrix W such that the projection W^T x is exactly equal to the true low-dimensional representation. In practice, this constraint is too rigid to capture the geometric structure of the data well. To tackle this problem, we relax this constraint to an elastic one on the projection, with the aim of revealing the geometric structure of the data. In this context, we propose an unsupervised dimensionality reduction model named flexible unsupervised feature extraction (FUFE) for image classification. Moreover, we theoretically prove that PCA and LPP, two of the most representative unsupervised dimensionality reduction models, are special cases of FUFE, and we propose a non-iterative algorithm to solve it. Experiments on five real-world image databases show the effectiveness of the proposed model.
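The core relaxation the abstract describes, replacing the rigid constraint W^T x = y with an elastic regression fit, can be sketched as a ridge-regression problem with a closed-form solution. This is a hypothetical illustration of the relaxation idea only, not the authors' full FUFE algorithm; the function name, the stand-in target Y, and the regularization weight `lam` are all assumptions for the example.

```python
import numpy as np

def flexible_projection(X, Y, lam=1e-2):
    """Solve W = argmin_W ||X W - Y||_F^2 + lam * ||W||_F^2 in closed form.

    Relaxes the rigid constraint X W == Y (projection exactly equal to the
    low-dimensional representation) into an elastic least-squares fit, so
    W only needs to approximate Y. Hypothetical sketch, not the paper's
    exact FUFE formulation.
    """
    d = X.shape[1]
    # Normal equations of the ridge objective: (X^T X + lam I) W = X^T Y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))   # 100 samples, 10 features
Y = rng.standard_normal((100, 2))    # stand-in 2-D target representation
W = flexible_projection(X, Y)
residual = np.linalg.norm(X @ W - Y)  # nonzero: the fit is elastic, not exact
```

With `lam > 0` the residual stays nonzero, which is exactly the flexibility the abstract contrasts with the exact-equality constraint used by most methods.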
Exemplar Based Deep Discriminative and Shareable Feature Learning for Scene Image Classification
In order to encode class correlation and class-specific information in image representation, we propose a new local feature learning approach named Deep Discriminative and Shareable Feature Learning (DDSFL). DDSFL aims to hierarchically learn feature transformation filter banks that transform raw pixel image patches into features. The learned filter banks are expected to: (1) encode common visual patterns of a flexible number of categories; (2) encode discriminative information; and (3) hierarchically extract patterns at different visual levels. In particular, in each single layer of DDSFL, shareable filters are jointly learned for classes that share similar patterns. The discriminative power of the filters is achieved by enforcing features from the same category to be close, while features from different categories are pushed far away from each other. Furthermore, we also propose two exemplar selection methods to iteratively select training data for more efficient and effective learning. Experimental results show that DDSFL achieves very promising performance, and it is strongly complementary to state-of-the-art Caffe features.
Comment: Pattern Recognition, Elsevier, 201
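The discriminative criterion described above, same-category features pulled close and different-category features pushed apart, can be illustrated with a contrastive-style pairwise loss. This is an assumed sketch in the spirit of that criterion, not the paper's exact objective; the function name and the `margin` parameter are hypothetical.

```python
import numpy as np

def discriminative_loss(features, labels, margin=1.0):
    """Pairwise loss in the spirit of DDSFL's discriminative term:
    same-class pairs are penalized by their squared distance (pull close),
    different-class pairs are penalized when closer than `margin`
    (push apart). Hypothetical illustration, not the paper's formulation.
    """
    n = len(labels)
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(features[i] - features[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # pull same class together
            else:
                loss += max(0.0, margin - d) ** 2   # push other classes apart
    return loss

feats = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
labs = np.array([0, 0, 1])
l = discriminative_loss(feats, labs)  # only the close same-class pair contributes
```

Minimizing a term of this shape over the filter-bank parameters is one common way to obtain the "close within category, far across categories" behavior the abstract names.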