Collaborative Summarization of Topic-Related Videos
Large collections of videos are grouped into clusters by a topic keyword, such as Eiffel Tower or Surfing, with many important visual concepts repeating across them. Such a topically close set of videos has mutual influence among its members, which can be exploited to summarize one video using information from the others in the set. We build on this intuition to develop a novel approach that extracts a summary capturing both the important particularities arising in the given video and the generalities identified across the set of videos. The topic-related videos provide visual context for identifying the important parts of the video being summarized. We achieve this by developing a collaborative sparse optimization method which can be efficiently solved by a half-quadratic minimization algorithm. Our work builds upon the idea of collaborative techniques from information retrieval and natural language processing, which typically use the attributes of other similar objects to predict the attributes of a given object. Experiments on two challenging and diverse datasets demonstrate the efficacy of our approach over state-of-the-art methods.
Comment: CVPR 201
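The single-video core of such a sparse, reconstruction-based summarizer can be sketched as row-sparse coding solved by proximal gradient descent. This is a generic formulation, not the paper's collaborative multi-video objective or its half-quadratic solver, and all parameter values below are illustrative.

```python
import numpy as np

def summarize_sparse(X, lam=0.5, n_iter=200):
    """Select representative columns of X (features x frames) by solving
    min_S ||X - X S||_F^2 + lam * sum_i ||S[i, :]||_2
    with proximal gradient descent. Frames whose rows of S have large norm
    reconstruct the other frames and form the summary."""
    d, n = X.shape
    # Step size from the Lipschitz constant of the smooth term: 2*||X^T X||_2.
    step = 1.0 / (2 * np.linalg.norm(X.T @ X, 2))
    S = np.zeros((n, n))
    for _ in range(n_iter):
        grad = 2 * X.T @ (X @ S - X)          # gradient of ||X - XS||_F^2
        S = S - step * grad
        # Proximal operator of the row-wise l2 penalty: shrink each row.
        norms = np.linalg.norm(S, axis=1, keepdims=True)
        scale = np.maximum(1 - step * lam / np.maximum(norms, 1e-12), 0.0)
        S = S * scale
    return S

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 40))             # toy features for 40 frames
S = summarize_sparse(X)
importance = np.linalg.norm(S, axis=1)
summary = np.argsort(importance)[::-1][:5]    # top-5 representative frames
```

The collaborative extension in the paper couples such objectives across topic-related videos so that shared concepts reinforce each other.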
Concept hierarchy across languages in text-based image retrieval: a user evaluation
The University of Sheffield participated in Interactive ImageCLEF 2005 with a comparative user evaluation of two interfaces: one displaying search results as a list, the other organizing retrieved images into a hierarchy of concepts displayed on the interface as an interactive menu. Data was analysed with respect to effectiveness (number of images retrieved), efficiency (time needed) and user satisfaction (opinions from questionnaires). Effectiveness and efficiency were calculated both at 5 minutes (CLEF condition) and at final time. The list was marginally more effective than the menu at 5 minutes (not statistically significant), but the two were equal at final time, showing that the menu needs more time to be used effectively. The list was more efficient at both 5 minutes and final time, although the difference was not statistically significant. Users preferred the menu (75% vs. 25% for the list), indicating it to be an interesting and engaging feature. An inspection of the logs showed that 11% of effective terms (i.e., single terms excluding stop-words) were not translated and that another 5% were translated incorrectly. Some of these terms were used by all participants and were fundamental to some of the tasks. Untranslated and mistranslated terms negatively affected the search, hierarchy generation and results display. More work has to be carried out to test the system under different settings, e.g. using a dictionary instead of MT, which appears ineffective at translating users' queries, which are rarely grammatically correct. The evaluation also indicated directions for a new interface design that allows the user to check query translation (in both input and output) and that incorporates visual content image retrieval to improve result organization.
Spoken content retrieval: A survey of techniques and technologies
Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
Query-driven document partitioning and collection selection
We present a novel strategy to partition a document collection onto several servers and to perform effective collection selection. The method is based on the analysis of query logs. We propose a novel document representation called the query-vectors model: each document is represented as a list recording the queries for which the document is a match, along with their ranks. To both partition the collection and build the collection selection function, we co-cluster queries and documents. The document clusters are then assigned to the underlying IR servers, while the query clusters represent queries that return similar results and are used for collection selection. We show that this document partitioning strategy greatly boosts the performance of standard collection selection algorithms, including CORI, compared with a round-robin assignment. Second, we show that by performing collection selection through matching the query to the existing query clusters and then choosing only one server, we reach an average precision-at-5 of up to 1.74 and consistently improve CORI precision by between 11% and 15%. As a side result, we show a way to identify rarely requested documents. Separating these documents from the rest of the collection allows the indexer to produce a more compact index containing only relevant documents that are likely to be requested in the future. In our tests, around 52% of the documents (3,128,366) are not returned among the first 100 top-ranked results of any query.
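The query-vectors representation can be sketched as follows; the 1/rank weighting is an assumption for illustration (the paper records the queries and their ranks directly), and the subsequent co-clustering step is omitted.

```python
from collections import defaultdict

def build_query_vectors(query_results):
    """Build a query-vectors representation from a query log: for each
    document, record the queries it matched and a rank-derived weight
    (here 1/rank -- an assumed weighting for illustration)."""
    qv = defaultdict(dict)
    for query, ranked_docs in query_results.items():
        for rank, doc in enumerate(ranked_docs, start=1):
            qv[doc][query] = 1.0 / rank
    return dict(qv)

# Toy query log: query -> ranked list of matching documents.
log = {
    "eiffel tower": ["d1", "d3"],
    "paris hotels": ["d3", "d2"],
}
qv = build_query_vectors(log)
# qv["d3"] == {"eiffel tower": 0.5, "paris hotels": 1.0}
```

Documents with similar query-vectors match similar queries, which is what makes them natural inputs for co-clustering with the queries themselves.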
Ranking deep web text collections for scalable information extraction
Information extraction (IE) systems discover structured information from natural language text, to enable much richer querying and data mining than possible directly over the unstructured text. Unfortunately, IE is generally a computationally expensive process, and hence improving its efficiency, so that it scales over large volumes of text, is of critical importance. State-of-the-art approaches for scaling the IE process focus on one text collection at a time. These approaches prioritize the extraction effort by learning keyword queries to identify the “useful” documents for the IE task at hand, namely, those that lead to the extraction of structured “tuples.” These approaches, however, do not attempt to predict which text collections are useful for the IE task—and hence merit further processing—and which ones will not contribute any useful output—and hence should be ignored altogether, for efficiency. In this paper, we focus on an especially valuable family of text sources, the so-called deep web collections, whose (remote) contents are only accessible via querying. Specifically, we introduce and study techniques for ranking deep web collections for an IE task, to prioritize the extraction effort by focusing on collections with substantial numbers of useful documents for the task. We study both (adaptations of) state-of-the-art resource selection strategies for distributed information retrieval, and IE-specific approaches. Our extensive experimental evaluation over realistic deep web collections, and for several different IE tasks, shows the merits and limitations of the alternative families of approaches, and provides a roadmap for addressing this critically important building block for efficient, scalable information extraction.
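A minimal sketch of sample-based collection ranking, the kind of resource-selection idea such work adapts: estimate each collection's count of useful documents from a small sample and rank by that estimate. The size-scaled (ReDDE-style) estimate, the collection names, and the toy usefulness predicate are all illustrative assumptions, not the paper's exact strategies.

```python
def rank_collections(samples, sizes, is_useful):
    """Rank collections by an estimated number of useful documents:
    the useful fraction observed in a small query-based sample,
    scaled by the (estimated) collection size."""
    scores = {
        name: sizes[name] * sum(map(is_useful, sample)) / len(sample)
        for name, sample in samples.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Toy samples drawn from two hypothetical deep web collections.
samples = {
    "col_a": ["CEO Jane Doe joined Acme", "Acme hired a new CTO"],
    "col_b": ["Weather was sunny today", "CEO announced earnings"],
}
sizes = {"col_a": 10_000, "col_b": 50_000}

def useful(doc):
    # Toy usefulness test for a person-title extraction task.
    return "CEO" in doc or "CTO" in doc

ranking = rank_collections(samples, sizes, useful)
# col_b ranks first: its larger size outweighs its lower useful fraction.
```

For IE, "useful" means the document yields tuples, which is why IE-specific signals can beat generic relevance-based resource selection.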
Relevance-Promoting Language Model for Short-Text Conversation
Despite the effectiveness of the sequence-to-sequence framework on the task of Short-Text Conversation (STC), the issue of under-exploitation of training data (i.e., the supervision signals from the query text are \textit{ignored}) remains unresolved. Moreover, the adopted \textit{maximization}-based decoding strategies, which tend to generate generic or repetitive responses, are unsuited to the STC task. In this paper, we propose to formulate the STC task as a language modeling problem and tailor-make a training strategy to adapt a language model for response generation. To enhance generation performance, we design a relevance-promoting transformer language model, which performs additional supervised source attention after the self-attention to increase the importance of informative query tokens when computing token-level representations. The model further refines the query representation with relevance clues inferred from its multiple references during training. At test time, we adopt a \textit{randomization-over-maximization} strategy to reduce the generation of generic responses. Experimental results on a large Chinese STC dataset demonstrate the superiority of the proposed model on relevance metrics and diversity metrics.\footnote{Code available at https://ai.tencent.com/ailab/nlp/dialogue/.}
Comment: AAAI 202
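A randomized decoding step of this flavor can be sketched as top-k sampling over the model's next-token logits: instead of always taking the argmax, sample from the k most probable candidates. The exact randomization-over-maximization scheme in the paper may differ, and the logits here are toy values.

```python
import numpy as np

def sample_top_k(logits, k=5, rng=None):
    """Sample the next token from the k highest-logit candidates,
    with probabilities given by a softmax over those candidates."""
    rng = rng or np.random.default_rng()
    top = np.argsort(logits)[::-1][:k]            # indices of the k best tokens
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

logits = np.array([0.1, 2.0, 1.5, -1.0, 0.3])
token = sample_top_k(logits, k=2, rng=np.random.default_rng(0))
# token is always 1 or 2 here, the two highest-logit indices
```

With k=1 this degenerates to maximization decoding; larger k trades a little relevance for diversity, which is the motivation the abstract gives.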
Reply With: Proactive Recommendation of Email Attachments
Email responses often contain items, such as a file or a hyperlink to an external document, that are attached to or included inline in the body of the message. Analysis of an enterprise email corpus reveals that 35% of the time when users include these items as part of their response, the attachable item is already present in their inbox or sent folder. A modern email client can proactively retrieve relevant attachable items from the user's past emails based on the context of the current conversation and recommend them for inclusion, to reduce the time and effort involved in composing the response. In this paper, we propose a weakly supervised learning framework for recommending attachable items to the user. As email search systems are commonly available, we constrain the recommendation task to formulating effective search queries from the context of the conversations. The query is submitted to an existing IR system to retrieve relevant items for attachment. We also present a novel strategy for generating labels from an email corpus, without the need for manual annotations, that can be used to train and evaluate the query formulation model. In addition, we describe a deep convolutional neural network that demonstrates satisfactory performance on this query formulation task when evaluated on the publicly available Avocado dataset and a proprietary dataset of internal emails obtained through an employee participation program.
Comment: CIKM2017. Proceedings of the 26th ACM International Conference on Information and Knowledge Management. 201
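The query-formulation step can be illustrated with a simple frequency baseline: pick the most frequent non-stopword terms from the conversation context and hand them to the existing search system. This stands in for the paper's learned convolutional model; the stopword list and length threshold are assumptions.

```python
import re
from collections import Counter

# Minimal illustrative stopword list (an assumption, not from the paper).
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "for", "is", "on"}

def formulate_query(context, max_terms=5):
    """Form a search query from the current conversation by selecting
    its most frequent non-stopword terms of length > 2."""
    terms = re.findall(r"[a-z']+", context.lower())
    counts = Counter(t for t in terms if t not in STOPWORDS and len(t) > 2)
    return [t for t, _ in counts.most_common(max_terms)]

context = "Could you send the budget spreadsheet for the Q3 budget review?"
query = formulate_query(context)
# 'budget' ranks first, since it appears twice in the context
```

The framework then submits such a query to the email search system and treats the retrieved past attachments as recommendation candidates.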
Context-Based Quotation Recommendation
While composing a new document, anything from a news article to an email or
essay, authors often utilize direct quotes from a variety of sources. Although
an author may know what point they would like to make, selecting an appropriate
quote for the specific context may be time-consuming and difficult. We
therefore propose a novel context-aware quote recommendation system which
utilizes the content an author has already written to generate a ranked list of
quotable paragraphs and spans of tokens from a given source document.
We approach quote recommendation as a variant of open-domain question
answering and adapt state-of-the-art BERT-based methods from open-QA to our
task. We conduct experiments on a collection of speech transcripts and
associated news articles, evaluating the models' paragraph ranking and span
prediction performance. Our experiments confirm the strong performance of
BERT-based methods on this task, which outperform bag-of-words and neural
ranking baselines by more than 30% relative across all ranking metrics.
Qualitative analyses show the difficulty of the paragraph and span
recommendation tasks and confirm the quotability of the best BERT model's
predictions, even when they are not the true selected quotes from the original
news articles.
Comment: 12 pages, 3 figure
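The bag-of-words ranking baseline that the BERT-based models are reported to outperform can be sketched as cosine similarity between term-count vectors of the author's context and each candidate paragraph. The scoring details here are assumptions for illustration, not the paper's exact baseline.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two term-count vectors (Counters)."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def rank_paragraphs(context, paragraphs):
    """Rank candidate quotable paragraphs by bag-of-words similarity
    to the content the author has already written."""
    ctx = Counter(context.lower().split())
    scored = [(cosine(ctx, Counter(p.lower().split())), p) for p in paragraphs]
    return [p for _, p in sorted(scored, key=lambda x: -x[0])]

paras = [
    "Climate change demands urgent policy action",
    "The stadium opened in 1990",
]
ranked = rank_paragraphs("climate change policy", paras)
# the climate paragraph ranks first: it shares three terms with the context
```

A span-prediction head on top of the ranked paragraphs, as in open-QA readers, then selects the quotable token span.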