K-Space at TRECVid 2007
In this paper we describe K-Space participation in
TRECVid 2007. K-Space participated in two tasks, high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features which included visual, audio and temporal elements. Specific concept detectors (such as Face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches including logistic regression and support vector machines (SVM). Finally we also experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance.
The first of the two systems was a "shot"-based interface,
where the results from a query were presented as a ranked
list of shots. The second interface was "broadcast"-based,
where results were presented as a ranked list of broadcasts.
Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
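The early versus late fusion distinction mentioned above can be sketched as follows. The feature values and classifier scores are invented for illustration and do not come from the K-Space runs:

```python
import numpy as np

def early_fusion(feature_sets):
    """Concatenate per-modality feature vectors into one vector
    before training a single classifier on the result."""
    return np.concatenate(feature_sets)

def late_fusion(scores, weights=None):
    """Combine per-modality classifier confidence scores after
    separate classifiers have run (here, a weighted average)."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores) / len(scores)
    return float(np.dot(scores, weights))

# Hypothetical per-modality features for one video shot
visual = np.array([0.2, 0.7])
audio = np.array([0.1])
temporal = np.array([0.9, 0.4, 0.3])

fused = early_fusion([visual, audio, temporal])  # one 6-dim vector for one SVM
score = late_fusion([0.8, 0.3, 0.6])             # average of three classifier scores
```

Early fusion lets the classifier model cross-modal correlations; late fusion keeps each modality's classifier independent and is more robust when one modality is missing.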
The TREC2001 video track: information retrieval on digital video information
The development of techniques to support content-based access to archives of digital video information has recently started to receive much attention from the research community. During 2001, the annual TREC activity, which has been benchmarking the performance of information retrieval techniques on a range of media for 10 years, included a "track" or activity which allowed investigation into approaches to support searching through a video library. This paper is not intended to provide a comprehensive picture of the different approaches taken by the TREC2001 video track participants; instead we give an overview of the TREC video search task and a thumbnail sketch of the approaches taken by different groups. The reason for writing this paper is to highlight the message from the TREC video track that there is now a variety of approaches available for searching and browsing through digital video archives, that these approaches do work, are scalable to larger archives, and can yield useful retrieval performance for users. This has important implications in making digital libraries of video information attainable.
Utilising semantic technologies for intelligent indexing and retrieval of digital images
The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they in principle rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this paper we present a semantically-enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries as well as the exploitation of lexical databases for explicit semantic-based query expansion.
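Ontology-driven query expansion of the kind described can be sketched with a toy example. The mini "ontology" below is hand-written for illustration and is not the paper's domain ontology:

```python
# Toy concept hierarchy: each broader concept maps to narrower terms.
ontology = {
    "vehicle": ["car", "bus", "bicycle"],
    "animal": ["dog", "cat", "horse"],
}

def expand_query(terms, ontology):
    """Return the original query terms plus any narrower concepts
    the ontology lists for them, preserving order."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(ontology.get(term, []))
    return expanded

result = expand_query(["vehicle", "red"], ontology)
# ['vehicle', 'car', 'bus', 'bicycle', 'red']
```

A real engine would draw the narrower terms from the domain ontology or a lexical database such as WordNet rather than a hard-coded dictionary.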
A semantic feature for human motion retrieval
With the explosive growth of motion capture data, it has become imperative in animation production to have an efficient search engine to retrieve motions from large motion repositories. However, because of the high dimension of the data space and the complexity of matching methods, most existing approaches cannot return results in real time. This paper proposes a high-level semantic feature in a low-dimensional space to represent the essential characteristics of different motion classes. On the basis of statistical training of a Gaussian mixture model, this feature can effectively achieve motion matching on both the global clip level and the local frame level. Experimental results show that our approach can retrieve similar motions with rankings from a large motion database in real time and can also annotate motion automatically on the fly. Copyright © 2013 John Wiley & Sons, Ltd.
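The statistical-training idea can be sketched in a deliberately simplified form: one Gaussian per motion class (rather than a full mixture) fitted to an invented scalar feature, with queries classified by maximum log-likelihood. All values are illustrative, not from the paper:

```python
import numpy as np

def fit_gaussian(samples):
    """Fit a 1-D Gaussian (mean, std) to training samples of one motion class."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(), samples.std() + 1e-6  # small floor avoids zero std

def log_likelihood(x, mean, std):
    """Log-density of x under the fitted Gaussian (up to class-independent terms)."""
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

# Hypothetical scalar "semantic feature" values per motion class
walk = fit_gaussian([1.0, 1.2, 0.9, 1.1])
run = fit_gaussian([2.8, 3.1, 3.0, 2.9])

def classify(x):
    """Assign a query feature to the class with the highest likelihood."""
    scores = {"walk": log_likelihood(x, *walk), "run": log_likelihood(x, *run)}
    return max(scores, key=scores.get)
```

A real GMM would use several components per class and a multi-dimensional feature, but the matching principle (score each class model, take the best) is the same.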
A framework for improving the performance of verification algorithms with a low false positive rate requirement and limited training data
In this paper we address the problem of matching patterns in the so-called
verification setting in which a novel, query pattern is verified against a
single training pattern: the decision sought is whether the two match (i.e.
belong to the same class) or not. Unlike previous work which has universally
focused on the development of more discriminative distance functions between
patterns, here we consider the equally important and pervasive task of
selecting a distance threshold which fits a particular operational requirement
- specifically, the target false positive rate (FPR). First, we argue on
theoretical grounds that a data-driven approach is inherently ill-conditioned
when the desired FPR is low, because by the very nature of the challenge only a
small portion of training data affects or is affected by the desired threshold.
This leads us to propose a general, statistical model-based method instead. Our
approach is based on the interpretation of an inter-pattern distance as
implicitly defining a pattern embedding which approximately distributes
patterns according to an isotropic multi-variate normal distribution in some
space. This interpretation is then used to show that the distribution of
training inter-pattern distances is the non-central chi2 distribution,
differently parameterized for each class. Thus, to make the class-specific
threshold choice we propose a novel analysis-by-synthesis iterative algorithm
which estimates the three free parameters of the model (for each class) using
task-specific constraints. The validity of the premises of our work and the
effectiveness of the proposed method are demonstrated by applying the method to
the task of set-based face verification on a large database of pseudo-random
head motion videos. Comment: IEEE/IAPR International Joint Conference on Biometrics, 201
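The model-based threshold selection described above can be sketched with SciPy's non-central chi-squared distribution. The degrees of freedom and non-centrality below are assumed stand-ins, not parameters estimated by the paper's analysis-by-synthesis algorithm:

```python
from scipy.stats import ncx2

# Illustrative parameters (the paper fits three free parameters per class;
# the values here are assumptions for the sketch).
df, nc = 8.0, 2.5        # degrees of freedom, non-centrality
target_fpr = 0.01        # desired false positive rate

# Model non-matching (impostor) squared distances as non-central
# chi-squared, and accept a match only when the observed distance falls
# below the target_fpr quantile of that model, so impostors are accepted
# with probability ~target_fpr.
threshold = ncx2.ppf(target_fpr, df, nc)
achieved_fpr = ncx2.cdf(threshold, df, nc)
```

The point of the model-based route is visible here: the threshold comes from the fitted distribution's quantile function, not from counting the few training distances that fall in the extreme low tail.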
Semantic levels of domain-independent commonsense knowledgebase for visual indexing and retrieval applications
Building intelligent tools for searching, indexing and retrieval applications is needed to manage the rapidly increasing amount of visual data. This has raised the need for building and maintaining ontologies and knowledgebases to support textual semantic representation of visual content, an important building block in these applications. This paper proposes a commonsense knowledgebase that forms the link between the visual world and its semantic textual representation. This domain-independent knowledge is provided at different levels of semantics by a fully automated engine that analyses, fuses and integrates previous commonsense knowledgebases. The knowledgebase extends these levels of semantics by adding two new ones: temporal event scenarios and psycholinguistic understanding. Statistical properties and an experimental evaluation show the coherency and effectiveness of the proposed knowledgebase in providing the knowledge needed for wide-domain visual applications.
Learning to Hash-tag Videos with Tag2Vec
User-given tags or labels are valuable resources for semantic understanding
of visual media such as images and videos. Recently, a new type of labeling
mechanism known as hash-tags have become increasingly popular on social media
sites. In this paper, we study the problem of generating relevant and useful
hash-tags for short video clips. Traditional data-driven approaches for tag
enrichment and recommendation use direct visual similarity for label transfer
and propagation. We attempt to learn a direct low-cost mapping from video to
hash-tags using a two step training process. We first employ a natural language
processing (NLP) technique, skip-gram models with neural network training to
learn a low-dimensional vector representation of hash-tags (Tag2Vec) using a
corpus of 10 million hash-tags. We then train an embedding function to map
video features to the low-dimensional Tag2vec space. We learn this embedding
for 29 categories of short video clips with hash-tags. A query video without
any tag-information can then be directly mapped to the vector space of tags
using the learned embedding and relevant tags can be found by performing a
simple nearest-neighbor retrieval in the Tag2Vec space. We validate the
relevance of the tags suggested by our system qualitatively and quantitatively
with a user study.
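The final retrieval step described above, nearest-neighbor lookup in the tag-embedding space, can be sketched as follows. The tag vectors and query embedding are made up for illustration and are not the learned Tag2Vec vectors:

```python
import numpy as np

# Toy stand-in for a learned Tag2Vec space: each hash-tag has a
# low-dimensional vector (values invented for illustration).
tag_vectors = {
    "#skateboarding": np.array([0.9, 0.1]),
    "#surfing": np.array([0.8, 0.3]),
    "#cooking": np.array([0.0, 1.0]),
}

def nearest_tags(video_embedding, tag_vectors, k=2):
    """Rank tags by cosine similarity to the video's embedded features."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(tag_vectors,
                    key=lambda t: cosine(video_embedding, tag_vectors[t]),
                    reverse=True)
    return ranked[:k]

# A query video whose features were mapped (by the learned embedding
# function) into the same space:
top = nearest_tags(np.array([0.85, 0.2]), tag_vectors)
```

In the actual system the query point comes from the trained video-to-Tag2Vec embedding function, and the search would run over the full hash-tag vocabulary rather than three tags.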
Image Retrieval Using Circular Hidden Markov Models with a Garbage State
Shape-based image and video retrieval is an active research topic in multimedia information retrieval. It is well known that there are significant variations in shapes of the same category extracted from images and videos. In this paper, we propose to use circular hidden Markov models (HMMs) for shape recognition and image retrieval. In our approach, we use a garbage state to explicitly deal with shape mismatch caused by shape deformation and occlusion. We propose a modified circular HMM for shape-based image retrieval and then use circular HMMs with a garbage state to further improve the performance. To evaluate the proposed algorithms, we have conducted experiments using the database of the MPEG-7 Core Experiments Shape-1, Part B. The experiments show that our approaches are robust to shape deformations such as shape variations and occlusion. The performance of our approaches is comparable to that of the state-of-the-art shape-based image retrieval systems in terms of accuracy and speed.
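A loose, non-probabilistic analogue of the two ideas, circular matching and a garbage state, can be sketched with symbol sequences. This is not the paper's HMM; the shape codes are hypothetical edge-direction strings, and a wildcard symbol stands in for the garbage state that absorbs deformation or occlusion:

```python
def circular_match(query, template, garbage="*"):
    """Best per-symbol match score of a circular template against a query.
    The template is tried at every rotation (mirroring the circular model),
    and the garbage symbol '*' matches any query symbol, absorbing local
    mismatch the way the garbage state absorbs deformation or occlusion."""
    n = len(template)
    best = 0
    for shift in range(n):
        rotated = template[shift:] + template[:shift]
        score = sum(1 for q, t in zip(query, rotated) if t == garbage or q == t)
        best = max(best, score)
    return best
```

For example, `circular_match("RDUU", "UURD")` finds the rotation that aligns the two codes exactly, and `circular_match("RXUU", "UUR*")` still scores a full match because the wildcard absorbs the deformed symbol `X`.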
- …