Hybrid multi-layer Deep CNN/Aggregator feature for image classification
Deep Convolutional Neural Networks (DCNNs) have established a remarkable
performance benchmark in the field of image classification, displacing
classical approaches based on hand-tailored aggregations of local descriptors.
Yet DCNNs impose high computational burdens both at training and at testing
time, and training them requires collecting and annotating large amounts of
training data. Supervised adaptation methods have been proposed in the
literature that partially re-learn a transferred DCNN structure from a new
target dataset. Yet these require expensive bounding-box annotations and are
still computationally expensive to learn. In this paper, we address these
shortcomings of DCNN adaptation schemes by proposing a hybrid approach that
combines conventional, unsupervised aggregators, such as Bag-of-Words (BoW),
with the DCNN pipeline by treating the output of intermediate layers as densely
extracted local descriptors.
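To make the idea concrete, here is a minimal sketch of such a hybrid pipeline under illustrative assumptions: the VGG16 backbone, the layer at which it is cut, and the 256-word codebook are our own choices for exposition, not the paper's configuration. Each spatial position of an intermediate feature map is treated as one local descriptor and the descriptors are aggregated into a BoW histogram.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

# Truncate a CNN at an intermediate layer; VGG16 and the cut point are
# illustrative assumptions (pretrained weights would be loaded in practice).
cnn = models.vgg16(weights=None).features[:17]
cnn.eval()

def dense_descriptors(img):
    """Treat each spatial position of the feature map as a local descriptor."""
    with torch.no_grad():
        fmap = cnn(img.unsqueeze(0))                 # (1, C, H, W)
    c = fmap.shape[1]
    return fmap.squeeze(0).reshape(c, -1).T.numpy()  # (H*W, C)

def fit_codebook(images, k=256):
    """Cluster descriptors from a set of training images into k visual words."""
    descs = np.vstack([dense_descriptors(im) for im in images])
    return KMeans(n_clusters=k, n_init=4).fit(descs)

def bow_feature(img, codebook):
    """L1-normalised Bag-of-Words histogram over the dense descriptors."""
    words = codebook.predict(dense_descriptors(img))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```

The resulting histogram is fixed-length regardless of image size, which is what allows the aggregated feature to stay small compared to the dense activations themselves.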
We test a variant of our approach that uses only intermediate DCNN layers on
the standard PASCAL VOC 2007 dataset and show performance significantly higher
than the standard BoW model and comparable to Fisher vector aggregation but
with a feature that is 150 times smaller. A second variant of our approach that
includes the fully connected DCNN layers significantly outperforms Fisher
vector schemes and performs comparably to DCNN approaches adapted to PASCAL VOC
2007, yet at only a small fraction of the training and testing cost.
Comment: Accepted at the ICASSP 2015 conference; 5 pages including references, 4 figures, and 2 tables.
Two-Stream Convolutional Networks for Action Recognition in Videos
We investigate architectures of discriminatively trained deep Convolutional
Networks (ConvNets) for action recognition in video. The challenge is to
capture the complementary information on appearance from still frames and
motion between frames. We also aim to generalise the best performing
hand-crafted features within a data-driven learning framework.
Our contribution is three-fold. First, we propose a two-stream ConvNet
architecture which incorporates spatial and temporal networks. Second, we
demonstrate that a ConvNet trained on multi-frame dense optical flow is able to
achieve very good performance in spite of limited training data. Finally, we
show that multi-task learning, applied to two different action classification
datasets, can be used to increase the amount of training data and improve the
performance on both.
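A minimal sketch of the two-stream layout follows; it is not the paper's networks. The shallow stand-in CNNs are illustrative assumptions (the actual streams are much deeper), while the flow stack of L = 10 frames, giving 2L input channels (a horizontal and a vertical displacement map per frame pair), and the averaging fusion of softmax scores follow the scheme described above.

```python
import torch
import torch.nn as nn

def small_convnet(in_channels, num_classes):
    """Stand-in stream; the actual spatial/temporal streams are much deeper."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, 7, stride=2, padding=3), nn.ReLU(),
        nn.MaxPool2d(3, stride=2),
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(128, num_classes),
    )

class TwoStream(nn.Module):
    def __init__(self, num_classes=101, flow_len=10):
        super().__init__()
        self.spatial = small_convnet(3, num_classes)              # one RGB frame
        self.temporal = small_convnet(2 * flow_len, num_classes)  # stacked flow

    def forward(self, rgb, flow):
        # Late fusion: average the class probabilities of the two streams.
        return (self.spatial(rgb).softmax(1) + self.temporal(flow).softmax(1)) / 2

model = TwoStream()
scores = model(torch.randn(1, 3, 224, 224), torch.randn(1, 20, 224, 224))
```

Keeping the streams separate until the final fusion is what lets the temporal network be trained on optical flow alone, where the motion signal is explicit rather than something the network must discover from raw pixels.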
Our architecture is trained and evaluated on the standard video actions
benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of
the art. It also exceeds by a large margin previous attempts to use deep nets
for video classification.
Multi-scale Orderless Pooling of Deep Convolutional Activation Features
Deep convolutional neural networks (CNNs) have shown their promise as a
universal representation for recognition. However, global CNN activations lack
geometric invariance, which limits their robustness for classification and
matching of highly variable scenes. To improve the invariance of CNN
activations without degrading their discriminative power, this paper presents a
simple but effective scheme called multi-scale orderless pooling (MOP-CNN).
This scheme extracts CNN activations for local patches at multiple scale
levels, performs orderless VLAD pooling of these activations at each level
separately, and concatenates the result. The resulting MOP-CNN representation
can be used as a generic feature for either supervised or unsupervised
recognition tasks, from image classification to instance-level retrieval; it
consistently outperforms global CNN activations without requiring any joint
training of prediction layers for a particular target dataset. In absolute
terms, it achieves state-of-the-art results on the challenging SUN397 and MIT
Indoor Scenes classification datasets, and competitive results on
ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.
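A rough sketch of the MOP-CNN scheme is given below. Here cnn_feature is a placeholder for any per-patch activation extractor (the paper uses the activations of a large pretrained network), and the patch sizes, stride, and codebook size are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def patches(img, size, stride):
    """Yield square size x size crops from an HxWxC image array."""
    h, w = img.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield img[y:y + size, x:x + size]

def fit_codebook(descs, k=100):
    """Fit a k-word codebook for one scale level from training descriptors."""
    return KMeans(n_clusters=k, n_init=4).fit(descs)

def vlad(descs, codebook):
    """Orderless VLAD pooling: sum of residuals to each visual word."""
    centers = codebook.cluster_centers_
    assign = codebook.predict(descs)
    v = np.zeros_like(centers)
    for word in range(len(centers)):
        sel = descs[assign == word]
        if len(sel):
            v[word] = (sel - centers[word]).sum(axis=0)
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))           # power normalisation
    return v / max(np.linalg.norm(v), 1e-12)      # L2 normalisation

def mop_feature(img, cnn_feature, codebooks, scales=(128, 64)):
    """Level 1: global CNN activation; finer levels: VLAD over patch activations."""
    parts = [cnn_feature(img)]
    for size, cb in zip(scales, codebooks):
        descs = np.stack([cnn_feature(p) for p in patches(img, size, size // 2)])
        parts.append(vlad(descs, cb))
    return np.concatenate(parts)                  # concatenation across scales
```

Because the VLAD pooling at each level discards patch positions, the concatenated feature gains geometric invariance at the finer scales while the level-1 global activation preserves the overall layout.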