Multi-Image Semantic Matching by Mining Consistent Features
This work proposes a multi-image matching method to estimate semantic
correspondences across multiple images. In contrast to the previous methods
that optimize all pairwise correspondences, the proposed method identifies and
matches only a sparse set of reliable features in the image collection. In this
way, the proposed method is able to prune non-repeatable features and is also
highly scalable, handling thousands of images. We additionally propose a
low-rank constraint to ensure the geometric consistency of feature
correspondences over the whole image collection. Besides the competitive
performance on multi-graph matching and semantic flow benchmarks, we also
demonstrate the applicability of the proposed method for reconstructing
object-class models and discovering object-class landmarks from images without
using any annotation.
Comment: CVPR 201
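The low-rank constraint mentioned above is commonly realized by projecting a stacked correspondence matrix onto its nearest rank-k approximation. The sketch below is an illustration of that generic truncated-SVD projection, not the paper's actual optimization; the function name and the interpretation of k (the number of latent "universe" features) are assumptions.

```python
import numpy as np

def low_rank_project(P, k):
    """Project a stacked correspondence matrix P onto its nearest rank-k
    approximation in Frobenius norm, via truncated SVD.

    Illustrative sketch: in a multi-image matching setting, P would stack
    pairwise correspondence blocks, and k the number of latent features.
    """
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    s[k:] = 0.0                 # zero out all but the k largest singular values
    return (U * s) @ Vt         # reassemble the rank-k matrix
```

Keeping the aggregate correspondence matrix low-rank is what enforces cycle consistency across the whole collection, rather than only between image pairs.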
Location recognition over large time lags
Would it be possible to automatically associate ancient pictures with modern ones and create fancy cultural heritage city maps? We introduce here the task of recognizing the location depicted in an old photo given modern annotated images collected from the Internet. We present an extensive analysis of different features, looking for the most discriminative and the most robust to the image variability induced by large time lags. Moreover, we show that the described task benefits from domain adaptation.
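The abstract does not say which domain adaptation technique is used; as one common, simple instance of the idea, the sketch below implements correlation alignment (CORAL), which matches the second-order statistics of old-photo features to those of modern-photo features. The function name and regularizer are assumptions for illustration only.

```python
import numpy as np

def coral(source, target, eps=1e-6):
    """Align source-domain features to the target domain by correlation
    alignment (CORAL): whiten the source covariance, then re-color it
    with the target covariance. Rows are samples, columns are features.
    """
    def cov(X):
        # sample covariance with a small ridge for numerical stability
        return np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])

    def mat_pow(C, p):
        # matrix power of a symmetric PSD matrix via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(np.maximum(w, eps) ** p) @ V.T

    Cs, Ct = cov(source), cov(target)
    return source @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5)
```

After alignment, the covariance of the transformed source features matches that of the target features, which lets a classifier trained on one domain transfer more gracefully to the other.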
K-Space at TRECVid 2007
In this paper we describe K-Space participation in TRECVid 2007. K-Space participated in two tasks: high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features which included visual, audio and temporal elements. Specific concept detectors (such as face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches including logistic regression and support vector machines (SVM). Finally, we also experimented with both early and late fusion for feature combination.

This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance.
The first of the two systems was a "shot"-based interface, where the results from a query were presented as a ranked list of shots. The second interface was "broadcast"-based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
- …