Relevance-based Word Embedding
Learning a high-dimensional dense representation for vocabulary terms, also
known as a word embedding, has recently attracted much attention in natural
language processing and information retrieval tasks. The embedding vectors are
typically learned based on term proximity in a large corpus. This means that
the objective in well-known word embedding algorithms, e.g., word2vec, is to
accurately predict adjacent word(s) for a given word or context. However, this
objective is not necessarily equivalent to the goal of many information
retrieval (IR) tasks. The primary objective in various IR tasks is to capture
relevance instead of term proximity, syntactic, or even semantic similarity.
This is the motivation for developing unsupervised relevance-based word
embedding models that learn word representations based on query-document
relevance information. In this paper, we propose two learning models with
different objective functions; one learns a relevance distribution over the
vocabulary set for each query, and the other classifies each term as belonging
to the relevant or non-relevant class for each query. To train our models, we
used over six million unique queries and the top-ranked documents retrieved in
response to each query, which are assumed to be relevant to the query. We
extrinsically evaluate our learned word representation models using two IR
tasks: query expansion and query classification. Both query expansion
experiments on four TREC collections and query classification experiments on
the KDD Cup 2005 dataset suggest that the relevance-based word embedding models
significantly outperform state-of-the-art proximity-based embedding models,
such as word2vec and GloVe.
Comment: to appear in the Proceedings of the 40th International ACM SIGIR
Conference on Research and Development in Information Retrieval (SIGIR '17).
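A rough sketch may help make the first objective (the per-query relevance distribution) concrete. The snippet below is an illustrative numpy toy, not the authors' code: the vocabulary size, learning rate, and random target distribution are all assumptions, with the target standing in for a relevance distribution estimated from the top-ranked documents for a query.

```python
# Minimal sketch (assumptions throughout): train embeddings so a query's
# representation predicts a relevance distribution over the vocabulary.
import numpy as np

rng = np.random.default_rng(0)
V, dim = 1000, 50                              # vocabulary size, embedding size
W_in = rng.normal(scale=0.1, size=(V, dim))    # input (query-term) embeddings
W_out = rng.normal(scale=0.1, size=(V, dim))   # output (vocabulary) embeddings

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train_step(query_terms, target_dist, lr=0.1):
    """One cross-entropy gradient step for a single query."""
    q = W_in[query_terms].mean(axis=0)         # query vector: mean of term vectors
    p = softmax(W_out @ q)                     # predicted relevance distribution
    err = p - target_dist                      # gradient of cross-entropy w.r.t. logits
    grad_q = W_out.T @ err
    W_out[:] -= lr * np.outer(err, q)
    W_in[query_terms] -= lr * grad_q / len(query_terms)
    return -float(np.sum(target_dist * np.log(p + 1e-12)))

# Toy target: stands in for a relevance distribution estimated from
# the top-ranked documents retrieved for the query.
target = softmax(rng.normal(size=V))
loss = train_step(query_terms=[3, 17, 42], target_dist=target)
print(f"cross-entropy: {loss:.3f}")
```

The second objective described in the abstract would replace the softmax over the full vocabulary with a binary relevant/non-relevant classifier per term.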
End-to-end Learning for Short Text Expansion
Effectively making sense of short texts is a critical task for many real
world applications such as search engines, social media services, and
recommender systems. The task is particularly challenging as a short text
contains very sparse information, often too sparse for a machine learning
algorithm to pick up useful signals. A common practice for analyzing short text
is to first expand it with external information, which is usually harvested
from a large collection of longer texts. In the literature, short text expansion
has been done with all kinds of heuristics. We propose an end-to-end solution
that automatically learns how to expand short text to optimize a given learning
task. A novel deep memory network is proposed to automatically find relevant
information from a collection of longer documents and reformulate the short
text through a gating mechanism. Using short text classification as a
demonstrating task, we show that the deep memory network significantly
outperforms classical text expansion methods with comprehensive experiments on
real-world data sets.
Comment: KDD 2017.
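As a concrete reading of the architecture sketched above, the toy below attends over a memory of encoded long documents and gates between the original short-text vector and the retrieved content. All shapes, the single scalar gate, and the random parameters are illustrative assumptions, not the paper's network.

```python
# Minimal sketch: attend over a memory of long-document vectors, then
# gate between the short text and the retrieved memory content.
import numpy as np

rng = np.random.default_rng(1)
dim, n_docs = 64, 200
memory = rng.normal(size=(n_docs, dim))          # encoded long documents
W_gate = rng.normal(scale=0.1, size=(2 * dim,))  # gate parameters (illustrative)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expand(short_text_vec):
    """Reformulate a short-text vector via attention plus a scalar gate."""
    attn = softmax(memory @ short_text_vec)      # relevance of each long doc
    retrieved = attn @ memory                    # weighted sum of documents
    g = sigmoid(W_gate @ np.concatenate([short_text_vec, retrieved]))
    return g * short_text_vec + (1.0 - g) * retrieved

expanded = expand(rng.normal(size=dim))
print(expanded.shape)   # (64,)
```

In the end-to-end setting the abstract describes, the gate and attention would be trained jointly with the downstream classifier rather than fixed as here.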
Parameterized Neural Network Language Models for Information Retrieval
Information Retrieval (IR) models need to deal with two difficult issues,
vocabulary mismatch and term dependencies. Vocabulary mismatch corresponds to
the difficulty of retrieving relevant documents that contain no exact query
terms, only semantically related ones. Term dependency refers to the need to
consider the relationships between the words of a query when estimating the
relevance of a document. A multitude of solutions has been proposed for
each of these two problems, but no principled model solves both. In parallel, in
the last few years, language models based on neural networks have been used to
cope with complex natural language processing tasks like emotion and paraphrase
detection. Although the distributed word representations they build on make
them well suited to handle both term dependencies and vocabulary mismatch,
such models cannot be used readily in IR, where one language model must be
estimated per document (or query); doing so naively is both computationally
infeasible and prone to over-fitting. Building on recent work that learns a
generic language model which can be modified through a set of
document-specific parameters, we explore the use of new neural network models
adapted to ad-hoc IR tasks.
Within the language model IR framework, we propose and study the use of a
generic language model as well as a document-specific language model. Both can
be used as a smoothing component, but the latter is more adapted to the
document at hand and has the potential of being used as a full document
language model. We experiment with such models and analyze their results on
TREC-1 to TREC-8 datasets.
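One simple way to picture the document-specific model acting as a smoothing component is the toy below: a generic output layer steered by a low-dimensional per-document parameter vector, interpolated with the collection model inside standard query-likelihood scoring. The parameterization (`doc_params` as a plain vector) and the interpolation weight are assumptions for illustration, not the paper's formulation.

```python
# Minimal sketch: a generic neural LM adapted by per-document parameters,
# used as the smoothing component of query-likelihood retrieval.
import numpy as np

rng = np.random.default_rng(2)
V, dim = 1000, 32
W_generic = rng.normal(scale=0.1, size=(V, dim))   # shared output embeddings
p_collection = np.full(V, 1.0 / V)                 # background collection model

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def doc_language_model(doc_params):
    """Document-specific LM: the generic model steered by a small
    per-document parameter vector (one simple realization of the idea)."""
    return softmax(W_generic @ doc_params)

def query_likelihood(query_terms, doc_params, lam=0.7):
    """Query-likelihood score with the neural LM as smoothing component."""
    p = lam * doc_language_model(doc_params) + (1.0 - lam) * p_collection
    return float(np.sum(np.log(p[query_terms])))

score = query_likelihood([5, 123, 777], doc_params=rng.normal(size=dim))
print(f"log-likelihood: {score:.2f}")
```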
Task-Oriented Query Reformulation with Reinforcement Learning
Search engines play an important role in our everyday lives by assisting us
in finding the information we need. When we input a complex query, however,
results are often far from satisfactory. In this work, we introduce a query
reformulation system based on a neural network that rewrites a query to
maximize the number of relevant documents returned. We train this neural
network with reinforcement learning. The actions correspond to selecting terms
to build a reformulated query, and the reward is the document recall. We
evaluate our approach on three datasets against strong baselines and show a
relative improvement of 5-20% in terms of recall. Furthermore, we present a
simple method to estimate a conservative upper-bound performance of a model in
a particular environment, and verify that there is still large room for
improvement.
Comment: EMNLP 2017.
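The training loop described above can be sketched with vanilla REINFORCE: sample include/exclude actions over candidate terms, run the reformulated query, and use document recall as the reward. The environment below is a stub and the independent Bernoulli policy is an assumption; the paper's actual model and retrieval setup are not reproduced here.

```python
# Minimal REINFORCE sketch: a stochastic policy selects terms for the
# reformulated query; recall of the retrieved documents is the reward.
import numpy as np

rng = np.random.default_rng(3)
n_candidates = 30
theta = np.zeros(n_candidates)         # one logit per candidate term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recall_of(selected_terms):
    """Stub environment: pretend terms 0-9 are the useful ones."""
    useful = set(range(10))
    return len(useful & set(selected_terms)) / len(useful)

def reinforce_step(lr=0.5, baseline=0.0):
    """Sample include/exclude actions, observe recall, update the policy."""
    probs = sigmoid(theta)
    actions = (rng.random(n_candidates) < probs).astype(float)
    reward = recall_of(np.flatnonzero(actions))
    grad = actions - probs             # grad log-prob for Bernoulli actions
    theta[:] += lr * (reward - baseline) * grad
    return reward

rewards = [reinforce_step() for _ in range(500)]
print(f"mean recall, first 50 vs last 50: "
      f"{np.mean(rewards[:50]):.2f} -> {np.mean(rewards[-50:]):.2f}")
```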
Reply With: Proactive Recommendation of Email Attachments
Email responses often contain items, such as a file or a hyperlink to an
external document, that are attached to or included inline in the body of the
message. Analysis of an enterprise email corpus reveals that 35% of the time
when users include these items as part of their response, the attachable item
is already present in their inbox or sent folder. A modern email client can
proactively retrieve relevant attachable items from the user's past emails
based on the context of the current conversation, and recommend them for
inclusion, to reduce the time and effort involved in composing the response. In
this paper, we propose a weakly supervised learning framework for recommending
attachable items to the user. As email search systems are commonly available,
we constrain the recommendation task to formulating effective search queries
from the context of the conversations. The query is submitted to an existing IR
system to retrieve relevant items for attachment. We also present a novel
strategy for generating labels from an email corpus, without the need for
manual annotations, that can be used to train and evaluate the query
formulation model. In addition, we describe a deep convolutional neural network
that demonstrates satisfactory performance on this query formulation task when
evaluated on the publicly available Avocado dataset and a proprietary dataset
of internal emails obtained through an employee participation program.
Comment: CIKM 2017. Proceedings of the 26th ACM International Conference on
Information and Knowledge Management, 2017.
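Since the recommendation task is constrained to query formulation, the core step can be pictured as scoring candidate terms from the conversation and keeping the best few, as in the stub below. The hand-set term weights and the fake search backend are placeholders: the paper uses a learned convolutional model and an existing email search system.

```python
# Minimal sketch: form a short query from the conversation context and
# submit it to an existing IR system (both components stubbed here).
import numpy as np

term_weights = {"quarterly": 2.1, "report": 1.8, "attached": 0.2,
                "thanks": -1.0, "draft": 1.5, "the": -2.0}  # placeholder scores

def formulate_query(context_terms, k=3):
    """Keep the k highest-scoring terms from the conversation context."""
    scored = sorted(context_terms,
                    key=lambda t: term_weights.get(t, 0.0), reverse=True)
    return scored[:k]

def search_past_emails(query):
    """Stub for the existing email search system the query is submitted to."""
    return [f"email_{abs(hash(' '.join(query))) % 100}.msg"]

context = ["thanks", "the", "quarterly", "report", "draft", "attached"]
query = formulate_query(context)
print(query, "->", search_past_emails(query))
```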
Learning a Deep Listwise Context Model for Ranking Refinement
Learning to rank has been intensively studied and widely applied in
information retrieval. Typically, a global ranking function is learned from a
set of labeled data, which can achieve good performance on average but may be
suboptimal for individual queries by ignoring the fact that relevant documents
for different queries may have different distributions in the feature space.
Inspired by the idea of pseudo relevance feedback, where top-ranked documents
(which we refer to as the local ranking context) can provide important
information about the query's characteristics, we propose to use the inherent
feature distributions of the top results to learn a Deep Listwise Context Model
that helps us fine-tune the initial ranked list. Specifically, we employ a
recurrent neural network to sequentially encode the top results using their
feature vectors, learn a local context model and use it to re-rank the top
results. Our model has three merits: (1) it can capture the
local ranking context based on the complex interactions between top results
using a deep neural network; (2) it can be built upon existing
learning-to-rank methods by directly using their extracted feature vectors; and
(3) it is trained with an attention-based loss function, which is more
effective and efficient than many existing listwise methods. Experimental
results show that the proposed model can significantly improve the
state-of-the-art learning-to-rank methods on benchmark retrieval corpora.
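A minimal sketch of the re-ranking loop described above: run a simple recurrent encoder over the feature vectors of the top results to obtain the local ranking context, then score each result against that context and re-sort. The plain tanh RNN and dot-product scoring are simplifications of the model in the abstract; all parameters below are random placeholders.

```python
# Minimal sketch: encode top results' feature vectors with an RNN,
# then re-rank them against the resulting local context.
import numpy as np

rng = np.random.default_rng(5)
n_top, n_feat, hidden = 10, 25, 16
W_x = rng.normal(scale=0.1, size=(hidden, n_feat))
W_h = rng.normal(scale=0.1, size=(hidden, hidden))
W_s = rng.normal(scale=0.1, size=(hidden, n_feat))  # projects features for scoring

def encode_local_context(features):
    """Run a simple tanh RNN over the top results' feature vectors and
    return the final hidden state as the local ranking context."""
    h = np.zeros(hidden)
    for x in features:                      # features: (n_top, n_feat)
        h = np.tanh(W_x @ x + W_h @ h)
    return h

def rerank(features):
    """Score each top result against the local context and re-sort."""
    context = encode_local_context(features)
    scores = (W_s @ features.T).T @ context   # attention-style dot products
    return np.argsort(-scores)                # new order of the top results

top_features = rng.normal(size=(n_top, n_feat))   # from an existing LTR model
print(rerank(top_features))
```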