Neural Networks for Information Retrieval
Machine learning plays a role in many aspects of modern IR systems, and deep
learning is applied in all of them. The fast pace of modern-day research has
given rise to many different approaches for many different IR problems. The
amount of information available can be overwhelming both for junior students
and for experienced researchers looking for new research topics and directions.
Additionally, it is interesting to see what key insights into IR problems the
new technologies are able to give us. The aim of this full-day tutorial is to
give a clear overview of current tried-and-trusted neural methods in IR and how
they benefit IR research. It covers key architectures, as well as the most
promising future directions.
Comment: Overview of full-day tutorial at SIGIR 2017.
Enhancing Sensitivity Classification with Semantic Features using Word Embeddings
Government documents must be reviewed to identify any sensitive information
they may contain, before they can be released to the public. However,
traditional paper-based sensitivity review processes are not practical for reviewing
born-digital documents. Therefore, there is a timely need for automatic sensitivity
classification techniques, to assist the digital sensitivity review process.
However, sensitivity is typically a product of the relations between
combinations of terms, such as who said what about whom; automatic sensitivity
classification is therefore a difficult task. Vector representations of terms,
such as word
embeddings, have been shown to be effective at encoding latent term features
that preserve semantic relations between terms, which can also be beneficial to
sensitivity classification. In this work, we present a thorough evaluation of the
effectiveness of semantic word embedding features, along with term and grammatical
features, for sensitivity classification. On a test collection of government
documents containing real sensitivities, we show that extending text classification
with semantic features and additional term n-grams results in significant improvements
in classification effectiveness, correctly classifying 9.99% more sensitive
documents compared to the text classification baseline.
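As a rough sketch of how such a feature combination can be assembled, the Python fragment below extends TF-IDF term n-gram features with averaged word-embedding features; the documents, labels, empty embedding table, and logistic regression classifier are illustrative stand-ins, not the paper's actual pipeline.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def embedding_features(doc, vectors, dim=300):
        # Average the vectors of in-vocabulary terms; zero vector if none match.
        words = [w for w in doc.lower().split() if w in vectors]
        return np.mean([vectors[w] for w in words], axis=0) if words else np.zeros(dim)

    docs = ["minister said the source was protected", "routine budget memo"]
    labels = [1, 0]   # 1 = sensitive, 0 = not sensitive (toy labels)
    vectors = {}      # assumed: term -> np.ndarray from pre-trained embeddings

    ngram_feats = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(docs).toarray()
    semantic_feats = np.vstack([embedding_features(d, vectors) for d in docs])
    X = np.hstack([ngram_feats, semantic_feats])   # extended feature matrix

    clf = LogisticRegression().fit(X, labels)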
Review on Information Retrieval for Desktop Search Engine
Search is an important aspect of information management that is often taken for granted. Domain-specific repositories are growing in both size and number, calling for efficient search and retrieval of documents. This paper explores the possible techniques and necessary system components for a search engine, charting several iterative optimizations over the last few years. The paper focuses on NLP models while retaining basic principles from other methods that assist in information search.
Concept Embedding for Information Retrieval
Concepts are used to solve the term-mismatch problem. However, we need an effective similarity measure between concepts, and word embeddings present a promising solution. In this study, we present three approaches to building concept vectors from word vectors. We use a vector-based measure to estimate inter-concept similarity. Our experiments show promising results. Furthermore, words and concepts become comparable; this could be used to improve the conceptual indexing process.
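A minimal sketch of the idea, assuming a concept is represented by the words of its label and that averaging those word vectors is one possible construction (the study's three approaches are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in word vectors (random here; pre-trained embeddings in practice).
    vectors = {w: rng.normal(size=50)
               for w in ["myocardial", "infarction", "heart", "attack"]}

    def concept_vector(label_words):
        # One simple construction: average the word vectors of the concept label.
        return np.mean([vectors[w] for w in label_words], axis=0)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    c1 = concept_vector(["myocardial", "infarction"])
    c2 = concept_vector(["heart", "attack"])
    print(cosine(c1, c2))   # vector-based inter-concept similarity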
Enhancing Translation Language Models with Word Embedding for Information Retrieval
In this paper, we explore the use of Word Embedding semantic resources for the
Information Retrieval (IR) task. These embeddings, produced by a shallow neural
network, have been shown to capture semantic similarities between words (Mikolov
et al., 2013). Hence, our goal is to enhance IR Language Models by addressing
the term mismatch problem. To do so, we applied the model presented in
"Integrating and Evaluating Neural Word Embeddings in Information Retrieval" by
Zuccon et al. (2015), which proposes to estimate the translation probability of
a Translation Language Model using the cosine similarity between Word
Embeddings. The results we obtained so far did not show a statistically
significant improvement compared to the classical Language Model.
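The scoring idea can be sketched as follows; the toy vocabulary, random vectors, cosine clipping, and vocabulary-wide normalization are assumptions for illustration rather than Zuccon et al.'s exact estimator, and smoothing with a collection model is omitted.

    import numpy as np

    rng = np.random.default_rng(1)
    vocab = ["cat", "feline", "dog", "pet"]
    vectors = {w: rng.normal(size=50) for w in vocab}   # stand-in embeddings

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def p_translate(u, w):
        # Translation probability from cosine similarity, clipped to be
        # non-negative and normalized over the vocabulary.
        sims = {v: max(cosine(vectors[u], vectors[v]), 0.0) + 1e-9 for v in vocab}
        return sims[w] / sum(sims.values())

    def log_p_query_given_doc(query, doc):
        # Translation language model: p(q|d) = prod_i sum_w p_t(q_i|w) p(w|d).
        logp = 0.0
        for q in query:
            p = sum(p_translate(q, w) * doc.count(w) / len(doc) for w in set(doc))
            logp += np.log(p + 1e-12)
        return logp

    print(log_p_query_given_doc(["cat"], ["feline", "pet", "dog"]))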
End-to-End Neural Ad-hoc Ranking with Kernel Pooling
This paper proposes K-NRM, a kernel based neural model for document ranking.
Given a query and a set of documents, K-NRM uses a translation matrix that
models word-level similarities via word embeddings, a new kernel-pooling
technique that uses kernels to extract multi-level soft match features, and a
learning-to-rank layer that combines those features into the final ranking
score. The whole model is trained end-to-end. The ranking layer learns desired
feature patterns from the pairwise ranking loss. The kernels transfer the
feature patterns into soft-match targets at each similarity level and enforce
them on the translation matrix. The word embeddings are tuned accordingly so
that they can produce the desired soft matches. Experiments on a commercial
search engine's query log demonstrate the improvements of K-NRM over prior
feature-based and neural-based state-of-the-art methods, and explain the source
of K-NRM's advantage: its kernel-guided embedding encodes a similarity metric
tailored for matching query words to document words, and provides effective
multi-level soft matches.
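A compact sketch of the kernel-pooling step may help; the kernel centers, width, toy similarity matrix, and uniform ranking weights below are illustrative, whereas in the paper all of these sit inside a network trained end-to-end from the pairwise ranking loss.

    import numpy as np

    def kernel_pooling(M, mus, sigma=0.1):
        # M is a |q| x |d| translation matrix of word-word cosine similarities.
        feats = []
        for mu in mus:
            K = np.exp(-((M - mu) ** 2) / (2 * sigma ** 2))   # RBF kernel
            soft_tf = K.sum(axis=1)               # soft match count per query term
            feats.append(np.log(soft_tf + 1e-10).sum())   # pool over query terms
        return np.array(feats)                    # one soft-TF feature per kernel

    mus = np.linspace(-0.9, 1.0, 11)              # kernel centers spanning [-1, 1]
    M = np.random.default_rng(2).uniform(-1, 1, size=(3, 20))   # toy matrix
    phi = kernel_pooling(M, mus)
    w = np.ones_like(phi)                         # learned in the real model
    print(np.tanh(w @ phi))                       # final ranking score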
Neural Vector Spaces for Unsupervised Information Retrieval
We propose the Neural Vector Space Model (NVSM), a method that learns
representations of documents in an unsupervised manner for news article
retrieval. In the NVSM paradigm, we learn low-dimensional representations of
words and documents from scratch using gradient descent and rank documents
according to their similarity with query representations that are composed from
word representations. We show that NVSM performs better at document ranking
than existing latent semantic vector space methods. The addition of NVSM to a
mixture of lexical language models and a state-of-the-art baseline vector space
model yields a statistically significant increase in retrieval effectiveness.
Consequently, NVSM adds a complementary relevance signal. In addition to
semantic matching, we find that NVSM performs well in cases where lexical
matching is needed.
NVSM learns a notion of term specificity directly from the document
collection without feature engineering. We also show that NVSM learns
regularities related to Luhn significance. Finally, we give advice on how to
deploy NVSM in situations where model selection (e.g., cross-validation) is
infeasible. We find that an unsupervised ensemble of multiple models trained
with different hyperparameter values performs better than a single
cross-validated model. Therefore, NVSM can safely be used for ranking documents
without supervised relevance judgments.
Comment: TOIS 2018.
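As a rough illustration of the ranking step, the sketch below composes a query vector by averaging word representations and ranks documents by cosine similarity; the random vectors are stand-ins for NVSM's learned representations, and the learned projection NVSM applies to the averaged query vector is omitted.

    import numpy as np

    rng = np.random.default_rng(3)
    word_vecs = {w: rng.normal(size=64) for w in ["election", "vote", "storm"]}
    doc_vecs = rng.normal(size=(100, 64))   # stand-in document representations

    def query_vector(terms):
        # Compose the query from word representations by averaging.
        return np.mean([word_vecs[t] for t in terms if t in word_vecs], axis=0)

    def rank(terms, k=5):
        q = query_vector(terms)
        sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
        return np.argsort(-sims)[:k]        # indices of the top-k documents

    print(rank(["election", "vote"]))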