ANTIQUE: A Non-Factoid Question Answering Benchmark
Considering the widespread use of mobile and voice search, answer passage
retrieval for non-factoid questions plays a critical role in modern information
retrieval systems. Despite the importance of the task, the community still lacks
large-scale non-factoid question answering collections with real questions and
comprehensive relevance judgments. In this
paper, we develop and release a collection of 2,626 open-domain non-factoid
questions from a diverse set of categories. The dataset, called ANTIQUE,
contains 34,011 manual relevance annotations. The questions were asked by real
users of a community question answering service, namely Yahoo! Answers.
Relevance judgments for all the answers to each question were collected through
crowdsourcing. To facilitate further research, we also include a brief analysis
of the data as well as baseline results on both classical and recently
developed neural IR models.
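To make the judgment format concrete, here is a minimal Python sketch of reading TREC-style relevance judgments and computing MRR for a ranked run; the file layout, the run format, and the relevance cut-off of 3 are illustrative assumptions rather than part of the ANTIQUE release.

```python
# Minimal sketch (not part of the ANTIQUE release): reading TREC-style
# relevance judgments and scoring a ranked run with MRR. The file layout,
# run format, and the relevance cut-off of 3 are illustrative assumptions.
from collections import defaultdict

def load_qrels(path, min_rel=3):
    """Map query_id -> set of answer ids judged relevant (label >= min_rel)."""
    qrels = defaultdict(set)
    with open(path) as f:
        for line in f:
            qid, _, did, label = line.split()
            if int(label) >= min_rel:
                qrels[qid].add(did)
    return qrels

def mean_reciprocal_rank(run, qrels):
    """run: query_id -> list of answer ids in ranked order."""
    scores = []
    for qid, ranking in run.items():
        rr = 0.0
        for i, did in enumerate(ranking, start=1):
            if did in qrels.get(qid, set()):
                rr = 1.0 / i
                break
        scores.append(rr)
    return sum(scores) / max(len(scores), 1)
```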
Target Type Identification for Entity-Bearing Queries
Identifying the target types of entity-bearing queries can help improve
retrieval performance as well as the overall search experience. In this work,
we address the problem of automatically detecting the target types of a query
with respect to a type taxonomy. We propose a supervised learning approach with
a rich variety of features. Using a purpose-built test collection, we show that
our approach outperforms existing methods by a remarkable margin. This is an
extended version of the article published with the same title in the
Proceedings of SIGIR'17.
Comment: Extended version of SIGIR'17 short paper, 5 pages.
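As a rough illustration of such a feature-based approach, the following Python sketch scores (query, candidate type) pairs with a supervised classifier; the concrete features and the scikit-learn pipeline are assumptions for illustration, not the authors' actual feature set or learner.

```python
# Illustrative sketch of a feature-based target type detector: score each
# (query, candidate type) pair with a supervised model. The concrete features
# and the scikit-learn pipeline are assumptions, not the authors' feature set.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def pair_features(query, type_label):
    q_terms = set(query.lower().split())
    t_terms = set(type_label.lower().split())
    return {
        "term_overlap": len(q_terms & t_terms),  # lexical match with the type label
        "query_length": len(q_terms),            # simple query statistic
        "type_depth": type_label.count("/"),     # placeholder taxonomy signal
    }

def train_type_detector(pairs, labels):
    """pairs: list of (query, type) tuples; labels: 1 if the type is a target type."""
    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    model.fit([pair_features(q, t) for q, t in pairs], labels)
    return model
```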
Target Apps Selection: Towards a Unified Search Framework for Mobile Devices
With the recent growth of conversational systems and intelligent assistants
such as Apple Siri and Google Assistant, mobile devices are becoming even more
pervasive in our lives. As a consequence, users increasingly engage with
mobile apps and frequently search for information within them.
However, users currently cannot search within their apps through these
intelligent assistants. This calls for a unified mobile search framework that
identifies the
target app(s) for the user's query, submits the query to the app(s), and
presents the results to the user. In this paper, we take a first step towards
developing such a unified mobile search framework. In particular, we introduce and
study the task of target apps selection, which has various potential real-world
applications. To this end, we analyze attributes of search queries as well as
user behaviors while searching with different mobile apps. The analyses are
done based on thousands of queries that we collected through crowdsourcing. We
finally study the performance of state-of-the-art retrieval models for this
task and propose two simple yet effective neural models that significantly
outperform the baselines. Our neural approaches are based on learning
high-dimensional representations for mobile apps. Our analyses and experiments
suggest specific future directions in this research area.
Comment: To appear at SIGIR 2018.
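The following PyTorch sketch shows one way to learn high-dimensional app representations and score them against a query; the bag-of-words query encoder and dot-product scoring are illustrative assumptions, not the authors' exact models.

```python
# Minimal PyTorch sketch of scoring apps for a query via learned app embeddings,
# loosely following the idea of high-dimensional app representations. The
# bag-of-words query encoder and dot-product scoring are illustrative
# assumptions, not the authors' exact models.
import torch
import torch.nn as nn

class TargetAppScorer(nn.Module):
    def __init__(self, vocab_size, num_apps, dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.app_emb = nn.Embedding(num_apps, dim)  # one learned vector per app

    def forward(self, query_token_ids):
        # query_token_ids: (batch, seq_len) padded token ids
        mask = (query_token_ids != 0).float().unsqueeze(-1)
        q = (self.word_emb(query_token_ids) * mask).sum(1) / mask.sum(1).clamp(min=1)
        return q @ self.app_emb.weight.T  # (batch, num_apps) app scores

# Training could minimize cross-entropy against the app the user actually chose:
# loss = nn.functional.cross_entropy(scorer(query_batch), target_app_ids)
```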
Learning a Deep Listwise Context Model for Ranking Refinement
Learning to rank has been intensively studied and widely applied in
information retrieval. Typically, a global ranking function is learned from a
set of labeled data, which can achieve good performance on average but may be
suboptimal for individual queries because it ignores the fact that relevant documents
for different queries may have different distributions in the feature space.
Inspired by the idea of pseudo relevance feedback, where top-ranked documents
(which we refer to as the local ranking context) can provide important
information about the query's characteristics, we propose to use the inherent
feature distributions of the top results to learn a Deep Listwise Context Model
that helps us fine-tune the initial ranked list. Specifically, we employ a
recurrent neural network to sequentially encode the top results using their
feature vectors, learn a local context model and use it to re-rank the top
results. There are three merits to our model: (1) Our model can capture the
local ranking context based on the complex interactions between top results
using a deep neural network; (2) Our model can be built upon existing
learning-to-rank methods by directly using their extracted feature vectors; (3)
Our model is trained with an attention-based loss function, which is more
effective and efficient than many existing listwise methods. Experimental
results show that the proposed model can significantly improve
state-of-the-art learning-to-rank methods on benchmark retrieval corpora.
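A rough PyTorch sketch of the re-ranking idea follows: an RNN encodes the feature vectors of the top-k results and emits refined scores, trained with a softmax listwise loss. The GRU encoder, layer sizes, and loss form are illustrative assumptions and are not claimed to reproduce the published architecture.

```python
# Rough PyTorch sketch of the re-ranking idea: a GRU encodes the feature
# vectors of the top-k results and emits refined scores, trained with a
# softmax listwise loss. Layer sizes and the loss form are illustrative
# assumptions and are not claimed to reproduce the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ListwiseContextReranker(nn.Module):
    def __init__(self, feature_dim, hidden_dim=64):
        super().__init__()
        self.encoder = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, doc_features):
        # doc_features: (batch, k, feature_dim) learning-to-rank features of the top-k results
        states, _ = self.encoder(doc_features)  # local ranking context per position
        return self.score(states).squeeze(-1)   # (batch, k) refined scores

def listwise_loss(scores, relevance):
    """Cross-entropy between the score distribution and a relevance-derived target."""
    target = F.softmax(relevance.float(), dim=-1)
    return -(target * F.log_softmax(scores, dim=-1)).sum(-1).mean()
```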
Diversifying query suggestions based on query documents
Many domain-specific search tasks are initiated by document-length queries, e.g., patent invalidity search aims to find prior art related to a new (query) patent. We call this type of search Query Document Search. In this type of search, the initial query document is typically long and contains diverse aspects (or sub-topics). Users tend to issue many queries based on the initial document to retrieve relevant documents. To help users in this situation, we propose a method to suggest diverse queries that can cover multiple aspects of the query document. We first identify multiple query aspects and then provide diverse query suggestions that are effective for retrieving relevant documents as well as being related to more query aspects. In the experiments, we demonstrate that our approach is effective in comparison to previous query suggestion methods.
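As an illustration of the overall pipeline, the sketch below splits a query document into aspects and emits one keyword query per aspect; TF-IDF sentence vectors with k-means clustering stand in for the paper's aspect identification and are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of the overall pipeline: split the query document into aspects
# and emit one keyword query per aspect. TF-IDF sentence vectors with k-means
# clustering stand in for the paper's aspect identification and are
# illustrative assumptions, not the authors' method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def diverse_suggestions(sentences, n_aspects=3, terms_per_query=4):
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(sentences)  # one row per sentence of the query document
    labels = KMeans(n_clusters=n_aspects, n_init=10).fit_predict(X)
    vocab = np.array(vectorizer.get_feature_names_out())
    queries = []
    for c in range(n_aspects):
        weights = np.asarray(X[labels == c].mean(axis=0)).ravel()  # aspect term weights
        queries.append(" ".join(vocab[weights.argsort()[::-1][:terms_per_query]]))
    return queries
```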