Learning to detect video events from zero or very few video examples
In this work we deal with the problem of high-level event detection in video.
Specifically, we study the challenging problems of i) learning to detect video
events from solely a textual description of the event, without using any
positive video examples, and ii) additionally exploiting very few positive
training samples together with a small number of ``related'' videos. For
learning only from an event's textual description, we first identify a general
learning framework and then study the impact of different design choices for
various stages of this framework. For additionally learning from example
videos, when true positive training samples are scarce, we employ an extension
of the Support Vector Machine that allows us to exploit ``related'' event
videos by automatically introducing different weights for subsets of the videos
in the overall training set. Experimental evaluations performed on the
large-scale TRECVID MED 2014 video dataset provide insight into the effectiveness
of the proposed methods.
Comment: Image and Vision Computing Journal, Elsevier, 2015, accepted for publication.
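A minimal sketch, assuming scikit-learn and synthetic feature vectors, of how subset-specific weights for true positives, "related" videos and negatives could be expressed via sample_weight in an SVM; the 0.3 weight for related clips is an illustrative assumption, not the paper's actual extension.

```python
# Hedged sketch: weighted SVM over true-positive, "related" and negative videos.
# The 0.3 weight for related clips and the random features are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(5, 128))      # few true positive examples
X_rel = rng.normal(0.5, 1.0, size=(20, 128))     # "related" event videos
X_neg = rng.normal(0.0, 1.0, size=(200, 128))    # background/negative videos

X = np.vstack([X_pos, X_rel, X_neg])
y = np.concatenate([np.ones(5), np.ones(20), np.zeros(200)])

# Different weights for subsets of the training set:
# true positives count fully, related videos only partially.
w = np.concatenate([np.full(5, 1.0), np.full(20, 0.3), np.full(200, 1.0)])

clf = SVC(kernel="linear")
clf.fit(X, y, sample_weight=w)

# Rank unseen clips by their event score.
print(clf.decision_function(X_neg[:3]))
```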
Content-based Video Retrieval
no abstract
Multi modal multi-semantic image retrieval
The rapid growth in the volume of visual information, e.g. images and video, can
overwhelm users’ ability to find and access the specific visual information of interest
to them. In recent years, ontology knowledge-based (KB) image information retrieval
techniques have been adopted in an attempt to extract knowledge from these
images, enhancing the retrieval performance. A KB framework is presented to
promote semi-automatic annotation and semantic image retrieval using multimodal
cues (visual features and text captions). In addition, a hierarchical structure for the KB
allows metadata to be shared that supports multi-semantics (polysemy) for concepts.
The framework builds up an effective knowledge base pertaining to a domain specific
image collection, e.g. sports, and is able to disambiguate and assign high level
semantics to ‘unannotated’ images.
Local feature analysis of visual content, namely using Scale Invariant Feature
Transform (SIFT) descriptors, has been deployed in the ‘Bag of Visual Words’
model (BVW) as an effective method to represent visual content information and to
enhance its classification and retrieval. Local features are more useful than global
features, e.g. colour, shape or texture, as they are invariant to image scale, orientation
and camera angle. An innovative approach is proposed for the representation,
annotation and retrieval of visual content using a hybrid technique based upon the use
of an unstructured visual word and upon a (structured) hierarchical ontology KB
model. The structural model facilitates the disambiguation of unstructured visual
words and a more effective classification of visual content, compared to a vector
space model, through exploiting local conceptual structures and their relationships.
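As a rough illustration of the Bag of Visual Words representation described above, the sketch below extracts SIFT descriptors, clusters them into a visual vocabulary with k-means, and histograms each image against it; OpenCV's SIFT and the vocabulary size are assumptions, not details from the thesis.

```python
# Hedged sketch of a Bag-of-Visual-Words pipeline: SIFT keypoints -> k-means
# vocabulary -> per-image word histogram. The vocabulary size is an assumption.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(paths):
    sift = cv2.SIFT_create()
    all_desc = []
    for p in paths:
        img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            all_desc.append(desc)
    return all_desc

def build_vocabulary(desc_per_image, k=200):
    # Cluster all local descriptors into k "visual words".
    stacked = np.vstack(desc_per_image)
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(stacked)

def bvw_histogram(desc, vocab):
    # Assign each descriptor to its nearest visual word and count occurrences.
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)   # L1-normalised word histogram
```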
The key contributions of this framework in using local features for image
representation include: first, a method to generate visual words using the semantic
local adaptive clustering (SLAC) algorithm which takes term weight and spatial
locations of keypoints into account. Consequently, the semantic information is
preserved. Second, a technique is used to detect the domain-specific ‘non-informative
visual words’ which are ineffective at representing the content of visual data and
degrade its categorisation ability. Third, a method to combine an ontology model with
a visual word model to resolve synonym (visual heterogeneity) and polysemy
problems is proposed. The experimental results show that this approach can discover
semantically meaningful visual content descriptions and recognise specific events,
e.g., sports events, depicted in images efficiently.
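The SLAC algorithm itself is not reproduced here; as a hedged stand-in for the second contribution, the sketch below flags 'non-informative visual words' by a simple document-frequency criterion, on the assumption that words occurring in almost every image or almost none carry little discriminative content. The thresholds are illustrative.

```python
# Hedged sketch: drop visual words whose document frequency is too high or too
# low to be informative. Thresholds (0.05 / 0.9) are illustrative assumptions.
import numpy as np

def informative_word_mask(histograms, low=0.05, high=0.9):
    """histograms: (n_images, n_words) array of BVW counts."""
    present = (histograms > 0).astype(float)
    doc_freq = present.mean(axis=0)                 # fraction of images containing each word
    return (doc_freq >= low) & (doc_freq <= high)   # keep mid-frequency words

def filter_histograms(histograms, mask):
    # Remove the columns corresponding to non-informative visual words.
    return histograms[:, mask]
```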
Since discovering the semantics of an image is an extremely challenging problem, one
promising approach to enhance visual content interpretation is to use any associated
textual information that accompanies an image, as a cue to predict the meaning of an
image, by transforming this textual information into a structured annotation for an
image, e.g. using XML, RDF, OWL or MPEG-7. Although text and image are distinct
types of information representation and modality, there are some strong, invariant,
implicit connections between images and any accompanying text information.
Semantic analysis of image captions can be used by image retrieval systems to
retrieve selected images more precisely. To do this, Natural Language Processing
(NLP) is first exploited in order to extract concepts from image captions. Next, an
ontology-based knowledge model is deployed in order to resolve natural language
ambiguities. To deal with the accompanying text information, two methods to extract
knowledge from textual information have been proposed. First, metadata can be
extracted automatically from text captions and restructured with respect to a semantic
model. Second, the use of Latent Semantic Indexing (LSI) in relation to a domain-specific ontology-based
knowledge model enables the combined framework to tolerate ambiguities and
variations (incompleteness) of metadata. The use of the ontology-based knowledge
model allows the system to find indirectly relevant concepts in image captions and
thus leverage these to represent the semantics of images at a higher level.
Experimental results show that the proposed framework significantly enhances image
retrieval and leads to a narrowing of the semantic gap between lower-level machine-derived
and higher-level human-understandable conceptualisation.
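A minimal sketch of the LSI step applied to image captions, assuming scikit-learn's TF-IDF vectoriser and truncated SVD; the toy captions and the latent dimensionality are illustrative assumptions.

```python
# Hedged sketch: Latent Semantic Indexing over image captions, so that captions
# sharing indirectly related terms end up close in the latent space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

captions = [
    "a footballer kicks the ball towards the goal",
    "tennis player serving at the baseline",
    "crowd watching a football match in the stadium",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(captions)

# Tiny latent space for this toy corpus; the real dimensionality is a tuning choice.
lsi = TruncatedSVD(n_components=2, random_state=0)
Z = lsi.fit_transform(X)

# Caption-to-caption semantic similarity in the latent space.
print(cosine_similarity(Z))
```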
Ariadne's Thread - Interactive Navigation in a World of Networked Information
This work-in-progress paper introduces an interface for the interactive
visual exploration of the context of queries using the ArticleFirst database, a
product of OCLC. We describe a workflow which allows the user to browse live
entities associated with 65 million articles. In the on-line interface, each
query leads to a specific network representation of the most prevailing
entities: topics (words), authors, journals and Dewey decimal classes linked to
the set of terms in the query. This network represents the context of a query.
Each of the network nodes is clickable: by clicking through, a user traverses a
large space of articles along dimensions of authors, journals, Dewey classes
and words simultaneously. We present different use cases of such an interface.
This paper provides a link between the quest for maps of science and on-going
debates in HCI about the use of interactive information visualisation to
empower users in their search.
Comment: CHI'15 Extended Abstracts, April 18-23, 2015, Seoul, Republic of Korea. ACM 978-1-4503-3146-3/15/0
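A hedged sketch of the kind of query-context network the interface describes: entities (topics, authors, journals) extracted from matching records, linked to the query and weighted by how often they occur. The networkx graph and toy records are assumptions, not OCLC's implementation.

```python
# Hedged sketch: build a small "context of a query" network linking a query
# to its most prevailing entities (topics, authors, journals). Toy data only.
from collections import Counter
import networkx as nx

records = [
    {"terms": ["navigation", "interface"], "author": "Smith", "journal": "JASIST"},
    {"terms": ["navigation", "maps"],      "author": "Lee",   "journal": "JASIST"},
    {"terms": ["interface", "search"],     "author": "Smith", "journal": "CHI"},
]

def context_network(query, records):
    g = nx.Graph()
    g.add_node(query, kind="query")
    counts = Counter()
    for r in records:
        for t in r["terms"]:
            counts[("topic", t)] += 1
        counts[("author", r["author"])] += 1
        counts[("journal", r["journal"])] += 1
    for (kind, name), weight in counts.items():
        g.add_node(name, kind=kind)
        g.add_edge(query, name, weight=weight)  # each node stays clickable/traversable
    return g

g = context_network("interactive navigation", records)
print(sorted(g["interactive navigation"].items(), key=lambda kv: -kv[1]["weight"]))
```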
Contextualization of topics - browsing through terms, authors, journals and cluster allocations
This paper builds on an innovative Information Retrieval tool, Ariadne. The
tool has been developed as an interactive network visualization and browsing
tool for large-scale bibliographic databases. It allows the user to gain
insights into a topic by contextualizing a search query (Koopman et al., 2015).
In this paper, we apply the Ariadne tool to a far smaller dataset of 111,616
documents in astronomy and astrophysics. Labeled as the Berlin dataset, this
dataset has been used by several research teams to apply and later compare
different clustering algorithms. The question driving this team effort is how to
delineate topics. This paper contributes to this challenge in two different
ways. First, we produce one of the cluster solutions and, second, we
use Ariadne (the method behind it, and the interface - called LittleAriadne) to
display cluster solutions of the different group members. By providing a tool
that allows the visual inspection of the similarity of article clusters
produced by different algorithms, we present a complementary approach to other
possible means of comparison. In particular, we discuss how we can - with
LittleAriadne - browse through the network of topical terms, authors, journals
and cluster solutions in the Berlin dataset and compare cluster solutions as
well as see their context.
Comment: Proceedings of the ISSI 2015 conference (accepted).
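A conventional numerical complement to the visual comparison described above is to score agreement between two cluster solutions directly; the sketch below uses the adjusted Rand index and normalised mutual information on toy label vectors (an assumption, not part of LittleAriadne).

```python
# Hedged sketch: numerical agreement between two cluster solutions over the
# same documents, complementing visual inspection. Labels below are toy data.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

solution_a = [0, 0, 1, 1, 2, 2, 2]   # clusters from one team's algorithm
solution_b = [1, 1, 0, 0, 0, 2, 2]   # clusters from another algorithm

print("ARI:", adjusted_rand_score(solution_a, solution_b))
print("NMI:", normalized_mutual_info_score(solution_a, solution_b))
```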
TRECVid 2006 experiments at Dublin City University
In this paper we describe our retrieval system and experiments performed for the automatic search task in TRECVid 2006. We submitted the following six automatic runs:
• F A 1 DCU-Base 6: Baseline run using only ASR/MT text features.
• F A 2 DCU-TextVisual 2: Run using text and visual features.
• F A 2 DCU-TextVisMotion 5: Run using text, visual, and motion features.
• F B 2 DCU-Visual-LSCOM 3: Text and visual features combined with concept detectors.
• F B 2 DCU-LSCOM-Filters 4: Text, visual, and motion features with concept detectors.
• F B 2 DCU-LSCOM-2 1: Text, visual, motion, and concept detectors with negative concepts.
The experiments were designed to study the effect of adding motion features and separately constructed semantic concept models to runs using only textual and visual features, and to establish a baseline for the manually-assisted search runs performed within the collaborative K-Space project and described in the corresponding TRECVid 2006 notebook paper. The results of
the experiments indicate that the performance of automatic search can be improved with suitable concept models. This, however, is very topic-dependent, and the questions of when to include such models and which concept models to include remain unanswered. Secondly, using motion features did not lead to performance improvement in our experiments. Finally, it was observed that our text features, despite displaying rather poor performance overall, may still be useful even for generic search topics.
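As a rough illustration of how such runs combine text, visual and concept-detector evidence, the sketch below performs a weighted late fusion of per-modality retrieval scores; the weights and scores are illustrative assumptions, not the DCU configuration.

```python
# Hedged sketch: weighted late fusion of per-modality retrieval scores for a
# topic. Weights are illustrative; in practice they would be tuned per run.
def fuse(scores_by_modality, weights):
    """scores_by_modality: {modality: {shot_id: score}}, scores in [0, 1]."""
    fused = {}
    for modality, weight in weights.items():
        for shot, score in scores_by_modality.get(modality, {}).items():
            fused[shot] = fused.get(shot, 0.0) + weight * score
    # Return shots ranked by fused score, best first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

scores = {
    "text":    {"shot_12": 0.9, "shot_40": 0.2},
    "visual":  {"shot_12": 0.4, "shot_40": 0.7},
    "concept": {"shot_40": 0.8},
}
print(fuse(scores, {"text": 0.5, "visual": 0.3, "concept": 0.2}))
```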