Anticipating Information Needs Based on Check-in Activity
In this work we address the development of a smart personal assistant that is
capable of anticipating a user's information needs based on a novel type of
context: the person's activity inferred from her check-in records on a
location-based social network. Our main contribution is a method that
translates a check-in activity into an information need, which is in turn
addressed with an appropriate information card. This task is challenging
because of the large number of possible activities and related information
needs, which need to be addressed in a mobile dashboard that is limited in
size. Our approach considers each possible activity that might follow the
last (and already completed) activity, and selects the top information cards
such that they maximize the likelihood of satisfying the user's information
needs for all possible future scenarios. The proposed models also incorporate
knowledge about the temporal dynamics of information needs. Using a combination
of historical check-in data and manual assessments collected via crowdsourcing,
we show experimentally the effectiveness of our approach.
Comment: Proceedings of the 10th ACM International Conference on Web Search and Data Mining (WSDM '17), 201
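The card-selection idea described above can be read as an expected-utility maximization: given a probability distribution over possible next activities and the information needs each activity triggers, choose the set of k cards that maximizes the expected number of satisfied needs. The sketch below is an illustrative assumption, not the paper's actual models; all names (`card_covers`, `select_cards`, the brute-force search) are hypothetical:

```python
# Hypothetical sketch of expected-need card selection (assumed data
# structures; not the paper's implementation).
from itertools import combinations

def expected_satisfaction(cards, activity_probs, needs_by_activity, card_covers):
    """Expected number of information needs satisfied by a set of cards,
    averaged over the distribution of possible next activities."""
    total = 0.0
    for activity, prob in activity_probs.items():
        satisfied = sum(
            1 for need in needs_by_activity[activity]
            if any(need in card_covers[c] for c in cards)
        )
        total += prob * satisfied
    return total

def select_cards(all_cards, k, activity_probs, needs_by_activity, card_covers):
    """Exhaustively pick the k-card dashboard with the highest
    expected satisfaction (fine for small card inventories)."""
    return max(
        combinations(all_cards, k),
        key=lambda cs: expected_satisfaction(
            cs, activity_probs, needs_by_activity, card_covers
        ),
    )
```

For example, with two candidate next activities ("dining" at 0.6, "movie" at 0.4) and cards covering their respective needs, the selection prefers the pair of cards that jointly covers the most probable needs; real systems would also fold in the temporal dynamics the abstract mentions.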
Trie-NLG: Trie Context Augmentation to Improve Personalized Query Auto-Completion for Short and Unseen Prefixes
Query auto-completion (QAC) aims at suggesting plausible completions for a
given query prefix. Traditionally, QAC systems have leveraged tries curated
from historical query logs to suggest most popular completions. In this
context, there are two specific scenarios that are difficult to handle for any
QAC system: short prefixes (which are inherently ambiguous) and unseen
prefixes. Recently, personalized Natural Language Generation (NLG) models have
been proposed to leverage previous session queries as context for addressing
these two challenges. However, such NLG models suffer from two drawbacks: (1)
some of the previous session queries could be noisy and irrelevant to the user
intent for the current prefix, and (2) NLG models cannot directly incorporate
historical query popularity. This motivates us to propose a novel NLG model for
QAC, Trie-NLG, which jointly leverages popularity signals from trie and
personalization signals from previous session queries. We train the Trie-NLG
model by augmenting the prefix with rich context comprising recent session
queries and top trie completions. This simple modeling approach overcomes the
limitations of trie-based and NLG-based approaches and leads to
state-of-the-art performance. We evaluate the Trie-NLG model using two large
QAC datasets. On average, our model achieves boosts of ~57% and ~14% in MRR
over the popular trie-based lookup and the strong BART-based baseline methods,
respectively. We make our code publicly available.
Comment: Accepted at Journal Track of ECML-PKDD 202
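The two signals the abstract combines can be sketched as follows: a popularity-ranked completion lookup (a flat table stands in for a real trie here) plus an augmented NLG input that concatenates recent session queries with the top trie completions ahead of the prefix. All names and the `[SEP]`/`[CTX]` separators are assumptions for illustration, not the paper's actual input format:

```python
# Illustrative sketch only: a flat popularity table stands in for a trie,
# and the separator tokens are assumed, not taken from the paper.
from collections import defaultdict

class QueryTrie:
    def __init__(self):
        self.counts = defaultdict(int)  # query -> historical popularity

    def add(self, query, count=1):
        self.counts[query] += count

    def complete(self, prefix, k=3):
        """Top-k most popular logged queries starting with the prefix."""
        matches = [(q, c) for q, c in self.counts.items() if q.startswith(prefix)]
        return [q for q, _ in sorted(matches, key=lambda x: -x[1])[:k]]

def build_nlg_input(prefix, session_queries, trie, k=3):
    """Augment the prefix with session context and top trie completions,
    forming the conditioning text for a seq2seq completion model."""
    context = " [SEP] ".join(session_queries + trie.complete(prefix, k))
    return f"{context} [CTX] {prefix}"
```

For a short or unseen prefix, the trie contributes popularity evidence (when any match exists) while the session queries contribute personalization, which is the combination the abstract credits for handling both hard cases.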