Neural Document Expansion with User Feedback
This paper presents a neural document expansion approach (NeuDEF) that
enriches document representations for neural ranking models. NeuDEF harvests
expansion terms from queries which lead to clicks on the document and weights
these expansion terms with learned attention. It is plugged into a standard
neural ranker and learned end-to-end. Experiments on a commercial search log
demonstrate that NeuDEF significantly improves the accuracy of state-of-the-art
neural rankers and expansion methods on queries with different frequencies.
Further studies show the contribution of click queries and learned expansion
weights, as well as the influence of document popularity on NeuDEF's
effectiveness.
Comment: The 2019 ACM SIGIR International Conference on the Theory of
Information Retrieval
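To make the expansion-and-weighting idea concrete, the following is a minimal PyTorch sketch of attention-weighted document expansion. The class name, dimensions, and the single-layer attention scorer are illustrative assumptions rather than the paper's exact NeuDEF architecture; it only shows how expansion terms harvested from click queries could be weighted against a document representation and appended to it before it is handed to a neural ranker.

```python
import torch
import torch.nn as nn


class ClickQueryExpansion(nn.Module):
    """Hypothetical sketch: weight click-query expansion terms with attention.

    Expansion terms are assumed to be harvested offline from queries that led
    to clicks on the document; only the learned weighting is shown here.
    """

    def __init__(self, emb_dim: int = 128):
        super().__init__()
        # Scores each expansion term against the pooled document representation.
        self.attn = nn.Linear(emb_dim * 2, 1)

    def forward(self, doc_emb: torch.Tensor, exp_term_embs: torch.Tensor) -> torch.Tensor:
        # doc_emb:       (batch, emb_dim)          pooled document representation
        # exp_term_embs: (batch, n_terms, emb_dim) embeddings of expansion terms
        n_terms = exp_term_embs.size(1)
        doc_tiled = doc_emb.unsqueeze(1).expand(-1, n_terms, -1)
        scores = self.attn(torch.cat([doc_tiled, exp_term_embs], dim=-1))  # (batch, n_terms, 1)
        weights = torch.softmax(scores, dim=1)
        expansion = (weights * exp_term_embs).sum(dim=1)                   # (batch, emb_dim)
        # Enriched representation passed to the downstream neural ranker.
        return torch.cat([doc_emb, expansion], dim=-1)


if __name__ == "__main__":
    expander = ClickQueryExpansion(emb_dim=128)
    doc = torch.randn(2, 128)            # two documents
    exp_terms = torch.randn(2, 5, 128)   # five expansion terms each
    enriched = expander(doc, exp_terms)  # shape (2, 256), fed to the ranker
```

Because the attention scorer is an ordinary module inside the ranking network, the expansion weights can be trained end-to-end with the ranker, which is the property the abstract highlights.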
Table Search Using a Deep Contextualized Language Model
Pretrained contextualized language models such as BERT have achieved
impressive results on various natural language processing benchmarks.
Benefiting from multiple pretraining tasks and large scale training corpora,
pretrained models can capture complex syntactic word relations. In this paper,
we use the deep contextualized language model BERT for the task of ad hoc table
retrieval. We investigate how to encode table content considering the table
structure and input length limit of BERT. We also propose an approach that
incorporates features from prior literature on table retrieval and jointly
trains them with BERT. In experiments on public datasets, we show that our best
approach can outperform the previous state-of-the-art method and BERT baselines
by a large margin under different evaluation metrics.
Comment: Accepted at SIGIR 2020 (Long Paper)
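As an illustration of the encoding question the abstract raises, the sketch below (using the Hugging Face transformers tokenizer) flattens a table's caption, headers, and a few rows into a single text sequence and pairs it with the query under BERT's 512-token input limit. The linearization order, the row cutoff, and the example table are assumptions for illustration, not the paper's exact encoding scheme.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")


def linearize_table(caption, headers, rows, max_rows=3):
    """Flatten a table into text: caption, then headers, then a few rows.

    The exact linearization used in the paper is not reproduced here; this
    simply illustrates truncating table content to respect BERT's input limit.
    """
    parts = [caption, " ".join(headers)]
    for row in rows[:max_rows]:
        parts.append(" ".join(str(cell) for cell in row))
    return " ".join(parts)


query = "2016 olympics medal table"
table_text = linearize_table(
    caption="2016 Summer Olympics medal table",
    headers=["Rank", "Country", "Gold", "Silver", "Bronze"],
    rows=[
        ["1", "United States", "46", "37", "38"],
        ["2", "Great Britain", "27", "23", "17"],
    ],
)

# Query and linearized table share one BERT input pair, truncated to 512 tokens.
encoding = tokenizer(
    query,
    table_text,
    truncation=True,
    max_length=512,
    padding="max_length",
    return_tensors="pt",
)
```

Hand-crafted retrieval features from prior work could then be concatenated with the pooled BERT output and trained jointly, which is the combination the abstract describes.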