Kernel codebooks for scene categorization
This paper introduces a method for scene categorization that models ambiguity in the popular codebook approach. The codebook approach describes an image as a bag of discrete visual codewords, and the frequency distribution of these words is used for image categorization. The traditional codebook model has two drawbacks, codeword uncertainty and codeword plausibility, both of which stem from the hard assignment of each visual feature to a single codeword. We show that allowing a degree of ambiguity in assigning codewords improves categorization performance on three state-of-the-art datasets.
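The contrast between hard and soft (kernel) assignment can be sketched in a few lines. This is not the paper's implementation, just a minimal NumPy illustration of the idea: each local descriptor contributes to all codewords with a Gaussian kernel weight instead of voting only for its nearest codeword; the kernel bandwidth `sigma` is an assumed free parameter.

```python
import numpy as np

def kernel_codebook_histogram(descriptors, codebook, sigma=1.0):
    """Soft-assignment histogram: every descriptor spreads its vote over
    all codewords with a Gaussian kernel weight, modelling codeword
    uncertainty, rather than hard-assigning to the single nearest word."""
    # squared Euclidean distances, shape (n_descriptors, n_codewords)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # kernel weights
    w /= w.sum(axis=1, keepdims=True)           # normalise per descriptor
    return w.sum(axis=0) / len(descriptors)     # average over descriptors

def hard_histogram(descriptors, codebook):
    """Traditional hard assignment, shown for comparison."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    h = np.bincount(d2.argmin(axis=1), minlength=len(codebook))
    return h / h.sum()
```

With large `sigma` the soft histogram tends toward uniform; as `sigma` shrinks it approaches the hard-assignment histogram.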
A Review of Codebook Models in Patch-Based Visual Object Recognition
The codebook model-based approach, while ignoring any structural aspects of vision, nonetheless provides state-of-the-art performance on current datasets. The key role of a visual codebook is to map low-level features into a fixed-length vector in histogram space, to which standard classifiers can be directly applied. The discriminative power of the visual codebook determines the quality of the codebook model, whereas the size of the codebook controls the complexity of the model. The construction of a codebook is therefore an important step, usually carried out by cluster analysis. However, clustering retains regions of high density in a distribution, so the resulting codebook need not have discriminant properties; codebook construction is also recognised as a computational bottleneck of such systems. In our recent work, we proposed a resource-allocating codebook for constructing a discriminant codebook in a one-pass design procedure that slightly outperforms more traditional approaches at drastically reduced computing times. In this review we survey several approaches proposed over the last decade, covering their feature detectors, descriptors, codebook construction schemes, choice of classifiers for recognising objects, and the datasets used to evaluate the proposed methods.
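The standard cluster-analysis construction described above can be sketched as follows. This is a minimal NumPy implementation of plain Lloyd's k-means plus the resulting bag-of-words encoding, not the resource-allocating codebook the abstract proposes; the parameters `k`, `iters`, and `seed` are illustrative choices.

```python
import numpy as np

def build_codebook(features, k, iters=20, seed=0):
    """Codebook construction by cluster analysis (Lloyd's k-means):
    cluster centres of the pooled local features become the codewords."""
    rng = np.random.default_rng(seed)
    centres = features[rng.choice(len(features), k, replace=False)].astype(float)
    for _ in range(iters):
        d2 = ((features[:, None] - centres[None]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):                 # skip empty clusters
                centres[j] = members.mean(axis=0)
    return centres

def bow_histogram(features, centres):
    """Map an image's variable-size set of local features to the
    fixed-length histogram vector a standard classifier can consume."""
    d2 = ((features[:, None] - centres[None]) ** 2).sum(-1)
    h = np.bincount(d2.argmin(axis=1), minlength=len(centres))
    return h / h.sum()
```

Note how the k-means objective favours dense regions of feature space, which is exactly why, as the abstract argues, the resulting codewords need not be discriminative for the categorization task.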
A Discriminative Representation of Convolutional Features for Indoor Scene Recognition
Indoor scene recognition is a multi-faceted and challenging problem due to
the diverse intra-class variations and the confusing inter-class similarities.
This paper presents a novel approach which exploits rich mid-level
convolutional features to categorize indoor scenes. Traditionally used
convolutional features preserve the global spatial structure, which is a
desirable property for general object recognition. However, we argue that this
structuredness is of limited help when there are large variations in scene
layouts, e.g., in indoor scenes. We propose to transform the structured
convolutional activations to another highly discriminative feature space. The
representation in the transformed space not only incorporates the
discriminative aspects of the target dataset, but it also encodes the features
in terms of the general object categories that are present in indoor scenes. To
this end, we introduce a new large-scale dataset of 1300 object categories
which are commonly present in indoor scenes. Our proposed approach achieves a
significant performance boost over previous state-of-the-art approaches on five
major scene classification datasets.
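One way to read the encoding idea above is as scoring mid-level convolutional descriptors against a bank of object-category classifiers and pooling away the spatial structure. The sketch below is a hypothetical illustration of that reading, not the paper's method: `classifier_w` and `classifier_b` stand in for an assumed bank of linear object-category classifiers, and max-pooling is one possible order-less aggregation.

```python
import numpy as np

def object_semantic_encoding(descriptors, classifier_w, classifier_b):
    """Hypothetical sketch: re-encode mid-level convolutional descriptors
    as responses of a bank of linear object-category classifiers, then
    max-pool over spatial positions. The global spatial structure is
    discarded; what survives is evidence for which object categories
    appear somewhere in the scene."""
    # scores: (n_descriptors, n_categories)
    scores = descriptors @ classifier_w.T + classifier_b
    return scores.max(axis=0)  # order-less pooling over positions
```

The output is a fixed-length vector indexed by object categories rather than by spatial position, which is the kind of layout-invariant representation the abstract argues for.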
Hybrid multi-layer Deep CNN/Aggregator feature for image classification
Deep Convolutional Neural Networks (DCNN) have established a remarkable
performance benchmark in the field of image classification, displacing
classical approaches based on hand-tailored aggregations of local descriptors.
Yet DCNNs impose high computational burdens both at training and at testing
time, and training them requires collecting and annotating large amounts of
training data. Supervised adaptation methods have been proposed in the
literature that partially re-learn a transferred DCNN structure from a new
target dataset. Yet these require expensive bounding-box annotations and are
still computationally expensive to learn. In this paper, we address these
shortcomings of DCNN adaptation schemes by proposing a hybrid approach that
combines conventional, unsupervised aggregators such as Bag-of-Words (BoW),
with the DCNN pipeline by treating the output of intermediate layers as densely
extracted local descriptors.
We test a variant of our approach that uses only intermediate DCNN layers on
the standard PASCAL VOC 2007 dataset and show performance significantly higher
than the standard BoW model and comparable to Fisher vector aggregation but
with a feature that is 150 times smaller. A second variant of our approach that
includes the fully connected DCNN layers significantly outperforms Fisher
vector schemes and performs comparably to DCNN approaches adapted to Pascal VOC
2007, yet at only a small fraction of the training and testing cost.
Comment: Accepted at the ICASSP 2015 conference; 5 pages including references, 4 figures and 2 tables.
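The core move of the hybrid pipeline, treating an intermediate layer's activation map as densely extracted local descriptors and aggregating them with a conventional encoder, can be sketched as follows. This is an illustrative NumPy fragment under assumed shapes, not the paper's implementation; hard-assignment BoW stands in for whichever aggregator (BoW, Fisher vectors) is used, and the codebook is assumed to have been trained offline.

```python
import numpy as np

def conv_map_to_descriptors(fmap):
    """Reinterpret a (C, H, W) intermediate-layer activation map as
    H*W densely extracted C-dimensional local descriptors, one per
    spatial position."""
    c, h, w = fmap.shape
    return fmap.reshape(c, h * w).T          # shape (H*W, C)

def bow_aggregate(fmap, codebook):
    """Aggregate the dense descriptors into a fixed-length image
    feature with a conventional hard-assignment BoW encoder."""
    desc = conv_map_to_descriptors(fmap)
    d2 = ((desc[:, None] - codebook[None]) ** 2).sum(-1)
    h = np.bincount(d2.argmin(axis=1), minlength=len(codebook))
    return h / h.sum()
```

The resulting feature length is the codebook size, independent of the input image resolution, which is what allows it to be far smaller than a Fisher vector over the same descriptors.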