Fast ConvNets Using Group-wise Brain Damage
We revisit the idea of brain damage, i.e. the pruning of the coefficients of
a neural network, and suggest how brain damage can be modified and used to
speed up convolutional layers. The approach uses the fact that many efficient
implementations reduce generalized convolutions to matrix multiplications. The
suggested brain damage process prunes the convolutional kernel tensor in a
group-wise fashion by adding group-sparsity regularization to the standard
training process. After such group-wise pruning, convolutions can be reduced to
multiplications of thinned dense matrices, which leads to a speedup. In a
comparison on AlexNet, the method achieves very competitive performance.
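The group-sparsity mechanism can be sketched as follows (a minimal NumPy illustration, not the authors' implementation; the choice of one group per input channel, the penalty weight, and the pruning threshold are assumptions):

```python
import numpy as np

def group_sparsity_penalty(kernel, lam=1e-3):
    """Group-lasso penalty over input-channel groups of a conv kernel.

    kernel: array of shape (out_ch, in_ch, kh, kw). Each group collects
    all weights that touch one input channel; the penalty is the sum of
    the groups' L2 norms, which drives whole groups toward zero during
    training so the matching rows of the im2col matrix can be dropped.
    """
    in_ch = kernel.shape[1]
    groups = kernel.transpose(1, 0, 2, 3).reshape(in_ch, -1)
    return lam * np.sum(np.linalg.norm(groups, axis=1))

def prune_groups(kernel, threshold=1e-8):
    """Drop input channels whose group norm has (near) vanished."""
    in_ch = kernel.shape[1]
    groups = kernel.transpose(1, 0, 2, 3).reshape(in_ch, -1)
    keep = np.linalg.norm(groups, axis=1) > threshold
    return kernel[:, keep], keep
```

After pruning, the convolution is a multiplication with a smaller dense matrix rather than a sparse one, which is what makes the speedup practical on standard BLAS routines.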
Mutual Exclusivity Loss for Semi-Supervised Deep Learning
In this paper we consider the problem of semi-supervised learning with deep
Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated
by the observation that unlabeled data is cheap and can be used to improve the
accuracy of classifiers. We propose an unsupervised
regularization term that explicitly forces the classifier's predictions for
different classes to be mutually exclusive and effectively guides the decision
boundary to lie in the low-density region between the manifolds corresponding to
different classes of data. Our proposed approach is general and can be used
with any backpropagation-based learning method. We show through different
experiments that our method can improve the object recognition performance of
ConvNets using unlabeled data.
Comment: 5 pages, 1 figure, ICIP 201
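One plausible form of such a mutual-exclusivity term is sketched below (a NumPy illustration under assumptions; the paper's exact objective may differ). It is minimized when each prediction is one-hot, so on unlabeled points it discourages the classifier from hedging between classes:

```python
import numpy as np

def mutual_exclusivity_loss(probs):
    """Mutual-exclusivity regularizer on unlabeled predictions.

    probs: (batch, n_classes) class probabilities. The penalty is
    L = -mean_i sum_k p_ik * prod_{l != k} (1 - p_il),
    which reaches its minimum (-1 per sample) at one-hot predictions,
    pushing the decision boundary into low-density regions.
    """
    n_classes = probs.shape[1]
    loss = np.zeros(probs.shape[0])
    for k in range(n_classes):
        others = np.delete(probs, k, axis=1)          # (batch, n_classes - 1)
        loss -= probs[:, k] * np.prod(1.0 - others, axis=1)
    return np.mean(loss)
```

Because the term needs no labels, it can simply be added to the supervised loss of any backpropagation-based model, weighted by a hyperparameter.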
Exemplar Based Deep Discriminative and Shareable Feature Learning for Scene Image Classification
In order to encode the class correlation and class specific information in
image representation, we propose a new local feature learning approach named
Deep Discriminative and Shareable Feature Learning (DDSFL). DDSFL aims to
hierarchically learn feature transformation filter banks to transform raw pixel
image patches to features. The learned filter banks are expected to: (1) encode
common visual patterns of a flexible number of categories; (2) encode
discriminative information; and (3) hierarchically extract patterns at
different visual levels. In particular, in each single layer of DDSFL, shareable
filters are jointly learned for classes that share similar patterns. The
discriminative power of the filters is achieved by enforcing features from
the same category to be close while pushing features from different categories
far apart. Furthermore, we also propose two exemplar selection
methods to iteratively select training data for more efficient and effective
learning. Based on the experimental results, DDSFL can achieve very promising
performance, and it also shows great complementary effect to the
state-of-the-art Caffe features.
Comment: Pattern Recognition, Elsevier, 201
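The "close within a class, far between classes" goal can be illustrated with a generic pairwise penalty (a contrastive-style sketch, not the paper's exact objective; the margin and pairwise averaging are assumptions):

```python
import numpy as np

def discriminative_penalty(features, labels, margin=1.0):
    """Pull same-class features together, push different-class apart.

    features: (n, d) feature vectors; labels: (n,). For each pair,
    a same-class pair adds its squared distance, while a different-class
    pair adds a hinge term max(0, margin - distance)^2. The penalty is
    averaged over all pairs.
    """
    n = features.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(features[i] - features[j])
            if labels[i] == labels[j]:
                total += d ** 2
            else:
                total += max(0.0, margin - d) ** 2
            pairs += 1
    return total / pairs
```

A penalty of this shape, applied to the filter responses, is one simple way to realize the discriminative constraint the abstract describes, alongside the sharing constraint on the filter banks.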