Bag-of-Features Image Indexing and Classification in Microsoft SQL Server Relational Database
This paper presents a novel relational database architecture aimed at visual object classification and retrieval. The framework is based on the bag-of-features image representation model combined with Support Vector Machine classification and is integrated in a Microsoft SQL Server database.
Comment: 2015 IEEE 2nd International Conference on Cybernetics (CYBCONF), Gdynia, Poland, 24-26 June 2015
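The bag-of-features step the abstract relies on can be sketched with synthetic data (the vocabulary size, descriptor dimensions, and data below are illustrative assumptions, not the paper's SQL Server integration): each local descriptor is quantized to its nearest "visual word", and the image becomes an L1-normalized histogram of word counts, which can then be fed to an SVM classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
vocabulary = rng.normal(size=(4, 8))          # 4 visual words, 8-dim descriptors

def bof_histogram(descriptors, vocab):
    # assign each local descriptor to its nearest visual word (Euclidean distance)
    dists = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()                  # L1-normalize to a distribution

image_descriptors = rng.normal(size=(50, 8))  # e.g. 50 SIFT-like local descriptors
h = bof_histogram(image_descriptors, vocabulary)
print(h.shape)  # (4,)
```

The resulting fixed-length histogram is what makes variable numbers of local descriptors per image compatible with a standard SVM.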
LIFER 2.0: discovering personal lifelog insights using an interactive lifelog retrieval system
This paper describes the participation of the Organiser Team in the ImageCLEFlifelog 2019 Solve My Life Puzzle (Puzzle) and Lifelog Moment Retrieval (LMRT) tasks. We proposed to use LIFER 2.0, an enhanced version of LIFER, an interactive retrieval system for personal lifelog data. For the Puzzle task we augmented LIFER 2.0 with additional visual features obtained from a traditional visual bag-of-words model, while for the LMRT task we applied LIFER 2.0 using only the provided information. The results on both tasks confirmed that, by using faceted filters and context browsing, a user can gain insights from their personal lifelog through very simple interactions. These results also serve as baselines against which other approaches in the ImageCLEFlifelog 2019 challenge can be compared.
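The faceted filtering the abstract credits can be illustrated with a minimal sketch (the record schema and facet names here are hypothetical; LIFER 2.0's actual data model is richer): each facet constraint narrows the set of lifelog records, and combining facets is just conjunction.

```python
# hypothetical lifelog records with location/activity/hour facets
records = [
    {"location": "office", "activity": "working", "hour": 10},
    {"location": "cafe",   "activity": "eating",  "hour": 13},
    {"location": "office", "activity": "meeting", "hour": 15},
]

def facet_filter(items, **facets):
    # keep only the records that match every requested facet value
    return [r for r in items if all(r.get(k) == v for k, v in facets.items())]

hits = facet_filter(records, location="office")
print(len(hits))  # 2
```

Stacking a second facet (`activity="meeting"`) narrows the two office records down to one, which is the simple interaction pattern the paper reports users exploiting.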
A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data
Topic modeling based on latent Dirichlet allocation (LDA) has been a
framework of choice to deal with multimodal data, such as in image annotation
tasks. Another popular approach to model the multimodal data is through deep
neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type
of topic model called the Document Neural Autoregressive Distribution Estimator
(DocNADE) was proposed and demonstrated state-of-the-art performance for text
document modeling. In this work, we show how to successfully apply and extend
this model to multimodal data, such as simultaneous image classification and
annotation. First, we propose SupDocNADE, a supervised extension of DocNADE,
that increases the discriminative power of the learned hidden topic features
and show how to employ it to learn a joint representation from image visual
words, annotation words and class label information. We test our model on the
LabelMe and UIUC-Sports data sets and show that it compares favorably to other
topic models. Second, we propose a deep extension of our model and provide an
efficient way of training the deep model. Experimental results show that our
deep model outperforms its shallow version and reaches state-of-the-art
performance on the Multimedia Information Retrieval (MIR) Flickr data set.
Comment: 24 pages, 10 figures. A version has been accepted by TPAMI on Aug 4th, 2015. Added a footnote about how to train the model in practice in Section 5.1. arXiv admin note: substantial text overlap with arXiv:1305.530
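The autoregressive factorization behind DocNADE, p(v) = Π_i p(v_i | v_{<i}), can be sketched in a toy form (all sizes and weights below are made up; the real model ties weights across positions and trains them by maximizing this log-likelihood): each word's probability is a softmax over the vocabulary, conditioned on a hidden state built from the preceding words.

```python
import numpy as np

rng = np.random.default_rng(1)
V, H = 6, 4                               # vocabulary size, hidden units
W = rng.normal(scale=0.1, size=(H, V))    # per-word embedding weights
U = rng.normal(scale=0.1, size=(V, H))    # softmax output weights
b, c = np.zeros(V), np.zeros(H)           # output and hidden biases

def log_likelihood(doc):
    acc = np.zeros(H)                     # running sum of preceding word embeddings
    logp = 0.0
    for w in doc:
        h = 1.0 / (1.0 + np.exp(-(c + acc)))   # sigmoid hidden state for v_{<i}
        logits = b + U @ h
        logp += logits[w] - np.log(np.exp(logits).sum())  # log softmax at word w
        acc += W[:, w]                    # fold the observed word into the state
    return logp

doc = [2, 0, 5, 2]                        # a toy document of word indices
print(log_likelihood(doc) < 0)            # True: it is a log-probability
```

SupDocNADE, as the abstract describes, adds class-label supervision on top of this generative objective to sharpen the learned topic features.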
Using Apache Lucene to Search Vector of Locally Aggregated Descriptors
Surrogate Text Representation (STR) is a profitable solution for efficient similarity search in metric spaces using conventional text search engines, such as Apache Lucene. This technique is based on comparing the permutations of some reference objects in place of the original metric distance. However, the Achilles heel of the STR approach is the need to reorder the result set of the search according to the metric distance. This forces the use of a support database to store the original objects, which requires efficient random I/O on fast secondary memory (such as flash-based storage). In this paper, we propose to
extend the Surrogate Text Representation to specifically address a class of
visual metric objects known as Vector of Locally Aggregated Descriptors (VLAD).
This approach is based on representing the individual sub-vectors forming the
VLAD vector with the STR, providing a finer representation of the vector and
enabling us to get rid of the reordering phase. The experiments on a publicly
available dataset show that the extended STR outperforms the baseline STR
achieving satisfactory performance close to that obtained with the original VLAD vectors.
Comment: In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016) - Volume 4: VISAPP, p. 383-39
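The permutation-to-text idea underlying STR can be sketched as follows (the reference objects and the repetition scheme are illustrative assumptions, not the paper's exact encoding): a vector is turned into a "text" of reference-object identifiers, repeating the i-th closest reference (k - i) times so that nearer references get higher term frequency. Two vectors that are close in the metric space then share many repeated terms, which a text engine such as Lucene scores as high textual similarity.

```python
import numpy as np

rng = np.random.default_rng(2)
references = rng.normal(size=(5, 3))      # 5 reference objects in 3-d space

def surrogate_text(vec, refs, k=3):
    # rank references by distance to vec; keep the k nearest
    order = np.linalg.norm(refs - vec, axis=1).argsort()[:k]
    # repeat the i-th closest reference id (k - i) times
    return " ".join(" ".join([f"R{r}"] * (k - i)) for i, r in enumerate(order))

text = surrogate_text(rng.normal(size=3), references)
# yields 6 tokens for k=3: 3 copies of the nearest id, 2 of the next, 1 of the third
print(text)
```

The paper's extension applies this encoding per sub-vector of the VLAD descriptor, which is what lets it drop the distance-based reordering phase.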