Contextual Media Retrieval Using Natural Language Queries
The widespread integration of cameras in hand-held and head-worn devices as
well as the ability to share content online enables a large and diverse visual
capture of the world that millions of users build up collectively every day. We
envision these images as well as associated meta information, such as GPS
coordinates and timestamps, to form a collective visual memory that can be
queried while automatically taking the ever-changing context of mobile users
into account. As a first step towards this vision, in this work we present
Xplore-M-Ego: a novel media retrieval system that allows users to query a
dynamic database of images and videos using spatio-temporal natural language
queries. We evaluate our system using a new dataset of real user queries as
well as through a usability study. One key finding is that there is a
considerable amount of inter-user variability, for example in the resolution of
spatial relations in natural language utterances. We show that our retrieval
system can cope with this variability using personalisation through an online
learning-based retrieval formulation.

Comment: 8 pages, 9 figures, 1 table
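The personalised, online learning-based retrieval formulation mentioned above can be illustrated with a minimal sketch. Everything here is hypothetical (class and method names are not from the Xplore-M-Ego system): per-user weights over competing interpretations of a spatial relation are updated multiplicatively from accept/reject feedback, so each user's resolution of phrases like "in front of" adapts over time.

```python
from collections import defaultdict

class PersonalisedRelationResolver:
    """Toy online learner: per-user preferences over interpretations
    of a spatial relation, updated from user feedback. Illustrative
    only; not the actual Xplore-M-Ego formulation."""

    def __init__(self, interpretations, lr=0.1):
        self.interpretations = list(interpretations)
        self.lr = lr
        # (user, relation) -> weight per interpretation, initially uniform
        self.weights = defaultdict(
            lambda: {i: 1.0 / len(self.interpretations)
                     for i in self.interpretations}
        )

    def resolve(self, user, relation):
        # pick the interpretation this user currently favours
        w = self.weights[(user, relation)]
        return max(w, key=w.get)

    def feedback(self, user, relation, chosen, accepted):
        # multiplicative update followed by renormalisation
        w = self.weights[(user, relation)]
        w[chosen] *= (1 + self.lr) if accepted else (1 - self.lr)
        total = sum(w.values())
        for k in w:
            w[k] /= total
```

Repeated positive feedback shifts the weight mass towards the interpretation a given user actually means, which is one simple way to absorb the inter-user variability the abstract reports.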
Prediction of Search Targets From Fixations in Open-World Settings
Previous work on predicting the target of visual search from human fixations
only considered closed-world settings in which training labels are available
and predictions are performed for a known set of potential targets. In this
work we go beyond the state of the art by studying search target prediction in
an open-world setting in which we no longer assume that we have fixation data
to train for the search targets. We present a dataset containing fixation data
of 18 users searching for natural images from three image categories within
synthesised image collages of about 80 images. In a closed-world baseline
experiment we show that we can predict the correct target image out of a
candidate set of five images. We then present a new problem formulation for
search target prediction in the open-world setting that is based on learning
compatibilities between fixations and potential targets
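The compatibility-learning formulation above can be sketched as follows. This is an illustrative stand-in, not the paper's actual model: a learned bilinear form scores each candidate target image against aggregated fixation features, and the highest-scoring candidate is predicted.

```python
import numpy as np

def compatibility_scores(fixation_feats, target_feats, W):
    """Bilinear compatibility between one fixation descriptor and each
    candidate target (hypothetical; shapes are assumptions):
    fixation_feats: (d_f,), target_feats: (n_targets, d_t), W: (d_f, d_t).
    Returns one score per candidate target."""
    return target_feats @ (W.T @ fixation_feats)

def predict_target(fixation_feats, target_feats, W):
    # rank candidates by compatibility and return the best index
    return int(np.argmax(compatibility_scores(fixation_feats, target_feats, W)))
```

Because scoring only needs features of fixations and of targets, unseen targets can be ranked at test time, which is what makes an open-world setting tractable.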
A Hierarchical Recurrent Encoder-Decoder For Generative Context-Aware Query Suggestion
Users may struggle to formulate an adequate textual query for their information
need. Search engines assist the users by presenting query suggestions. To
preserve the original search intent, suggestions should be context-aware and
account for the previous queries issued by the user. Achieving context
awareness is challenging due to data sparsity. We present a probabilistic
suggestion model that is able to account for sequences of previous queries of
arbitrary lengths. Our novel hierarchical recurrent encoder-decoder
architecture allows the model to be sensitive to the order of queries in the
context while avoiding data sparsity. Additionally, our model can generate
suggestions for rare, or long-tail, queries. The produced suggestions are synthetic and are
sampled one word at a time, using computationally cheap decoding techniques.
This is in contrast to current synthetic suggestion models relying upon machine
learning pipelines and hand-engineered feature sets. Results show that our
model outperforms existing context-aware approaches in a next-query prediction
setting. In addition to query suggestion, our model is general enough to be
used in a variety of other applications.

Comment: To appear in the Conference on Information and Knowledge Management
(CIKM) 201
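The word-at-a-time sampling described above can be sketched generically. The model interface here (`next_word_probs`) is a hypothetical stand-in for the paper's hierarchical recurrent encoder-decoder; the point is only that a synthetic suggestion is built by repeatedly sampling from a conditional next-word distribution until an end-of-sequence token appears.

```python
import random

def sample_suggestion(next_word_probs, context_queries, max_len=10, eos="</s>"):
    """Sample a synthetic query suggestion one word at a time.

    next_word_probs(context_queries, prefix) -> {word: probability}
    is an assumed interface standing in for the decoder's softmax.
    """
    prefix = []
    for _ in range(max_len):
        probs = next_word_probs(context_queries, prefix)
        words = list(probs)
        weights = [probs[w] for w in words]
        # cheap ancestral sampling from the conditional distribution
        word = random.choices(words, weights=weights, k=1)[0]
        if word == eos:
            break
        prefix.append(word)
    return " ".join(prefix)
```

Because suggestions are assembled word by word rather than looked up, the decoder can produce queries never seen in the logs, which is how such a model covers long-tail inputs.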
Improving tag recommendation using social networks
In this paper we address the task of recommending additional tags to partially annotated media objects, in our case images. We propose an extendable framework that can recommend tags using a combination of different personalised and collective contexts. We combine information from four contexts: (1) all the photos in the system, (2) a user's own photos, (3) the photos of a user's social contacts, and (4) the photos posted in the groups of which a user is a member. Variants of methods (1) and (2) have been proposed in previous work, but the use of (3) and (4) is novel.
For each of the contexts we use the same probabilistic model, together with a Borda Count based aggregation approach that combines the recommendations from the different contexts into a unified ranking of recommended tags. We evaluate our system using a large set of real-world data from Flickr. We show that by using personalised contexts we can significantly improve tag recommendation compared to using collective knowledge alone. We also analyse our experimental results to explore the capabilities of our system with respect to a user's social behaviour.
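The Borda Count aggregation described above can be sketched in a few lines (a generic illustration, not the paper's implementation): each context contributes a ranked list of candidate tags, a tag at rank r in a list of length n earns n − r points, and points are summed across lists to produce the unified ranking.

```python
from collections import Counter

def borda_aggregate(ranked_lists):
    """Combine ranked tag lists (one per context) via Borda Count.

    A tag at 0-based rank r in a list of length n receives n - r points;
    the final ranking sorts tags by total points, descending.
    """
    scores = Counter()
    for ranking in ranked_lists:
        n = len(ranking)
        for rank, tag in enumerate(ranking):
            scores[tag] += n - rank
    return [tag for tag, _ in scores.most_common()]
```

In the paper's setting the four input lists would come from the four contexts (all photos, the user's own photos, contacts' photos, and group photos), so a tag recommended by several contexts rises in the combined ranking.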
Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives
This paper tackles the problem of reading comprehension over long narratives
where documents easily span over thousands of tokens. We propose a curriculum
learning (CL) based Pointer-Generator framework for reading/sampling over large
documents, enabling diverse training of the neural model based on the notion of
alternating contextual difficulty. This can be interpreted as a form of domain
randomization and/or generative pretraining during training. To this end, the
usage of the Pointer-Generator softens the requirement of having the answer
within the context, enabling us to construct diverse training samples for
learning. Additionally, we propose a new Introspective Alignment Layer (IAL),
which reasons over decomposed alignments using block-based self-attention. We
evaluate our proposed method on the NarrativeQA reading comprehension
benchmark, achieving state-of-the-art performance and improving over existing
baselines in relative terms on both BLEU-4 and ROUGE-L. Extensive ablations
confirm the effectiveness of our proposed IAL and CL components.

Comment: Accepted to ACL 201
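The notion of alternating contextual difficulty can be sketched with a toy sampling schedule. This is an assumed illustration, not the authors' code: training steps alternate between an "easy" pool (contexts known to contain the answer) and a "hard" pool (sampled contexts that may not), which the pointer-generator tolerates because it does not require the answer to appear verbatim in the context.

```python
def curriculum_batches(easy_samples, hard_samples, steps):
    """Yield training samples, alternating between easy and hard pools.

    Hypothetical schedule: even steps draw from the easy pool, odd steps
    from the hard pool, cycling through each pool in order.
    """
    for step in range(steps):
        pool = easy_samples if step % 2 == 0 else hard_samples
        yield pool[(step // 2) % len(pool)]
```

Interleaving the two regimes exposes the model to both answer-bearing and answer-free contexts throughout training, which is one way to read the abstract's "domain randomization" interpretation.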