Bayes Merging of Multiple Vocabularies for Scalable Image Retrieval
The Bag-of-Words (BoW) representation underlies many recent
state-of-the-art image retrieval systems. Typically, multiple vocabularies are
generated to correct quantization artifacts and improve recall. However, this
routine is corrupted by vocabulary correlation, i.e., overlapping among
different vocabularies. Vocabulary correlation leads to an over-counting of the
indexed features in the overlapped area, or the intersection set, thus
compromising the retrieval accuracy. To address the correlation
problem while preserving the benefit of high recall, this paper proposes a Bayes
merging approach to down-weight the indexed features in the intersection set.
By explicitly modeling the correlation problem from a probabilistic view, a
joint similarity at both the image and feature level is estimated for the indexed
features in the intersection set.
We evaluate our method through extensive experiments on three benchmark
datasets. Despite its simplicity, Bayes merging applies to a variety of merging
tasks and consistently improves the baselines in multi-vocabulary merging.
Moreover, Bayes merging is efficient in terms of both time and memory cost, and
yields competitive performance compared with state-of-the-art methods.
Comment: 8 pages, 7 figures, 6 tables, accepted to CVPR 201
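The down-weighting idea can be illustrated with a minimal sketch. Assuming each vocabulary yields a set of indexed features per candidate image, naive score merging over-counts features retrieved under both vocabularies; here a fixed `down_weight` factor and the `merge_scores` helper are illustrative stand-ins for the paper's Bayes-estimated joint similarity, not its actual implementation:

```python
from collections import defaultdict

def merge_scores(postings_a, postings_b, down_weight=0.5):
    """Merge per-image scores from two vocabularies' inverted files.

    postings_a / postings_b map image_id -> set of feature ids retrieved
    under each vocabulary. Features retrieved under both vocabularies
    (the intersection set) would be over-counted by naive summation, so
    they are down-weighted. The fixed `down_weight` is a hypothetical
    stand-in for an estimated joint similarity.
    """
    scores = defaultdict(float)
    for image_id in set(postings_a) | set(postings_b):
        feats_a = postings_a.get(image_id, set())
        feats_b = postings_b.get(image_id, set())
        overlap = feats_a & feats_b          # the intersection set
        unique = (feats_a | feats_b) - overlap
        # Unique features count fully; overlapping ones are down-weighted
        # instead of being counted once per vocabulary.
        scores[image_id] = len(unique) + down_weight * len(overlap)
    return dict(scores)
```

With a per-feature weight in place of the constant, the same loop structure accommodates a learned or estimated similarity.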
Image retrieval with hierarchical matching pursuit
This paper introduces a novel image representation for image retrieval,
built on a new type of feature with strong discriminative power.
Despite the multi-scale nature of objects, most existing models perform feature
extraction at a single fixed scale, which inevitably degrades the performance of
the whole system. Motivated by this, we introduce a hierarchical sparse coding
architecture for image retrieval to explore multi-scale cues. Sparse codes
extracted at lower layers are passed recursively to higher layers, fusing
cues from different scales. Experiments on the
Holidays dataset show that the proposed method achieves an excellent retrieval
performance with a small code length.
Comment: 5 pages, 6 figures, conferenc
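The recursive coding scheme might be sketched as follows. This is a toy approximation under stated assumptions: `sparse_code` uses simple top-k magnitude thresholding as a stand-in for a real sparse coding solver, and the per-layer dictionaries are assumed to be pre-learned:

```python
import numpy as np

def sparse_code(x, dictionary, k=4):
    # Hypothetical stand-in for sparse coding: keep the k largest
    # dictionary responses (by magnitude) and zero out the rest.
    response = dictionary @ x
    small = np.argsort(np.abs(response))[:-k]
    response[small] = 0.0
    return response

def hierarchical_codes(patch, dictionaries):
    """Recursive multi-scale encoding: the sparse codes produced at one
    layer become the input to the next layer, and the codes from all
    layers are concatenated so that cues from different scales are fused
    in the final representation."""
    codes = []
    x = patch
    for D in dictionaries:
        x = sparse_code(x, D)  # re-encode the previous layer's codes
        codes.append(x)
    return np.concatenate(codes)
```

A real system would replace the thresholding with an L1-regularized solver and insert pooling between layers, but the recursive code-of-codes structure is the point of the sketch.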
Region-Based Image Retrieval Revisited
The region-based image retrieval (RBIR) technique is revisited. In early attempts
at RBIR in the late 90s, researchers found many ways to specify region-based
queries and spatial relationships; however, the ways of characterizing the
regions, such as color histograms, were very limited at that time. Here,
we revisit RBIR by incorporating semantic specification of objects and
intuitive specification of spatial relationships. Our contributions are the
following. First, to support multiple aspects of semantic object specification
(category, instance, and attribute), we propose a multitask CNN feature that
allows us to use deep learning techniques and to jointly handle multi-aspect
object specification. Second, to help users specify spatial relationships among
objects in an intuitive way, we propose techniques for recommending spatial
relationships. In particular, by mining the search results, the system can
recommend feasible spatial relationships among the objects. It can also
recommend likely spatial relationships from the assigned object category names
based on a language prior. Moreover, object-level inverted indexing supports very fast
shortlist generation, and re-ranking based on spatial constraints provides
users with instant RBIR experiences.
Comment: To appear in ACM Multimedia 2017 (Oral)
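Object-level inverted indexing with spatial re-ranking could look roughly like the sketch below. The `ObjectIndex` class and the "left of" predicate over bounding-box centers are illustrative assumptions, not the paper's system:

```python
from collections import defaultdict

class ObjectIndex:
    """Sketch of object-level inverted indexing for region-based
    retrieval: shortlist images containing the queried categories via
    the inverted file, then re-rank (here, filter) by a spatial
    constraint over detected bounding boxes."""

    def __init__(self):
        self.index = defaultdict(set)     # category -> set of image ids
        self.objects = defaultdict(list)  # image id -> [(category, box)]

    def add(self, image_id, category, box):
        # box is (x0, y0, x1, y1) in image coordinates
        self.index[category].add(image_id)
        self.objects[image_id].append((category, box))

    def query_left_of(self, cat_a, cat_b):
        """Images where some cat_a object lies left of some cat_b object
        (hypothetical predicate: compare box center x-coordinates)."""
        shortlist = self.index[cat_a] & self.index[cat_b]

        def cx(box):
            return (box[0] + box[2]) / 2

        hits = []
        for img in shortlist:
            xs_a = [cx(b) for c, b in self.objects[img] if c == cat_a]
            xs_b = [cx(b) for c, b in self.objects[img] if c == cat_b]
            if any(a < b for a in xs_a for b in xs_b):
                hits.append(img)
        return sorted(hits)
```

The set intersection over the inverted file is what makes shortlist generation fast; the spatial predicate only runs on the (small) shortlist.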
CNN Features off-the-shelf: an Astounding Baseline for Recognition
Recent results indicate that generic descriptors extracted from
convolutional neural networks are very powerful. This paper adds to the
mounting evidence that this is indeed the case. We report on a series of
experiments conducted for different recognition tasks using the publicly
available code and model of the OverFeat network, which was trained to perform
object classification on ILSVRC13. We use features extracted from the OverFeat
network as a generic image representation to tackle the diverse range of
recognition tasks of object image classification, scene recognition,
fine-grained recognition, attribute detection, and image retrieval, applied to a
diverse set of datasets. We selected these tasks and datasets as they gradually
move further away from the original task and data the OverFeat network was
trained to solve. Astonishingly, we report consistent superior results compared
to the highly tuned state-of-the-art systems in all the visual classification
tasks on various datasets. For instance retrieval, it consistently outperforms
low-memory-footprint methods, except on the sculptures dataset. The results are
achieved using a linear SVM classifier (or distance in case of retrieval)
applied to a feature representation of size 4096 extracted from a layer in the
net. The representations are further modified using simple augmentation
techniques e.g. jittering. The results strongly suggest that features obtained
from deep learning with convolutional nets should be the primary candidate in
most visual recognition tasks.Comment: version 3 revisions: 1)Added results using feature processing and
data augmentation 2)Referring to most recent efforts of using CNN for
different visual recognition tasks 3) updated text/captio
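The retrieval pipeline the abstract describes (a distance computed over a 4096-dimensional layer activation) can be sketched as below. Random vectors stand in for actual OverFeat features, and `retrieve` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def retrieve(query_feat, gallery_feats, top_k=5):
    """Off-the-shelf-feature retrieval sketch: L2-normalize 4096-d
    descriptors (random stand-ins here for CNN layer activations) and
    rank the gallery by Euclidean distance to the query."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q = l2norm(query_feat)
    g = l2norm(gallery_feats)
    # After L2 normalization, Euclidean distance ranks the same way as
    # cosine similarity.
    dists = np.linalg.norm(g - q, axis=1)
    return np.argsort(dists)[:top_k]
```

For the classification tasks, the abstract's recipe swaps the distance ranking for a linear SVM trained on the same fixed features.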