Setting per-field normalisation hyper-parameters for the named-page finding search task
Per-field normalisation has been shown to be effective for Web search tasks such as named-page finding. However, per-field normalisation also suffers from having hyper-parameters to tune on a per-field basis. In this paper, we argue that the purpose of per-field normalisation is to adjust the linear relationship between field length and term frequency. We experiment with standard Web test collections, using three document fields, namely the body of the document, its title, and the anchor text of its incoming links. From our experiments, we find that across different collections, the linear correlation values given by the optimised hyper-parameter settings are proportional to the maximum negative linear correlation. Based on this observation, we devise an automatic method for setting the per-field normalisation hyper-parameter values without using relevance assessments for tuning. According to the evaluation results, this method is shown to be effective for the body and title fields. In addition, we explain the difficulty of setting the per-field normalisation hyper-parameter for the anchor text field.
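To make the role of the per-field hyper-parameter concrete, here is a minimal Python sketch assuming a Normalisation-2-style formula (tfn = tf · log2(1 + c · avg_len / len)), as used in per-field DFR models such as PL2F; the field statistics and c values below are illustrative, not the settings derived in the paper.

```python
import math

def normalised_tf(tf: float, field_len: float, avg_field_len: float, c: float) -> float:
    """Normalise a raw within-field term frequency tf for a field of length
    field_len, given the collection-wide average field length and the
    per-field hyper-parameter c (the quantity tuned in the paper)."""
    return tf * math.log2(1.0 + c * avg_field_len / field_len)

# Illustrative field statistics and hyper-parameter values (assumptions):
# the same raw tf contributes differently per field once each field's
# length statistics and its own c are applied.
fields = {
    "body":   {"tf": 3, "len": 850, "avg_len": 900, "c": 1.0},
    "title":  {"tf": 1, "len": 6,   "avg_len": 8,   "c": 10.0},
    "anchor": {"tf": 5, "len": 40,  "avg_len": 30,  "c": 0.5},
}
for name, f in fields.items():
    tfn = normalised_tf(f["tf"], f["len"], f["avg_len"], f["c"])
    print(f"{name}: tfn = {tfn:.3f}")
```

Raising c dampens the penalty for long fields; the paper's observation is that good c settings correspond to a predictable correlation between field length and normalised term frequency, which is what makes automatic tuning possible.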
Semantic Sort: A Supervised Approach to Personalized Semantic Relatedness
We propose and study a novel supervised approach to learning statistical semantic relatedness models from subjectively annotated training examples. The proposed semantic model consists of parameterized co-occurrence statistics associated with textual units of a large background knowledge corpus. We present an efficient algorithm for learning such semantic models from a training sample of relatedness preferences. Our method is corpus-independent and can essentially rely on any sufficiently large (unstructured) collection of coherent texts. Moreover, the approach facilitates the fitting of semantic models for specific users or groups of users. We present the results of an extensive range of experiments, from small to large scale, indicating that the proposed method is effective and competitive with the state-of-the-art.

Comment: 37 pages, 8 figures. A short version of this paper was already published at ECML/PKDD 201
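As a rough illustration of learning from relatedness preferences, the following sketch fits a linear model over hypothetical co-occurrence features with a hinge-style pairwise update; the feature construction and the loss are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

DIM = 16           # number of co-occurrence features per word pair (assumed)
w = np.zeros(DIM)  # parameters weighting the co-occurrence statistics

def features(pair):
    """Stand-in for co-occurrence statistics of a word pair drawn from a
    background corpus (hypothetical; real features would be counts, PMI,
    etc. computed from the corpus)."""
    seed = abs(hash(pair)) % (2**32)
    return np.random.default_rng(seed).random(DIM)

def relatedness(pair):
    return float(w @ features(pair))

# A relatedness preference states that pair (a, b) should score higher
# than pair (a, c); the training data is a sample of such preferences.
preferences = [
    (("cat", "dog"), ("cat", "car")),
    (("piano", "violin"), ("piano", "granite")),
] * 50

lr, margin = 0.1, 1.0
for preferred, other in preferences:
    x_pos, x_neg = features(preferred), features(other)
    # Hinge-style update: enforce score(preferred) >= score(other) + margin.
    if w @ x_pos - w @ x_neg < margin:
        w += lr * (x_pos - x_neg)

print(relatedness(("cat", "dog")), relatedness(("cat", "car")))
```

Because the supervision is preferences rather than absolute relatedness scores, the same machinery can be fit per user or per group, which is the personalization angle the abstract highlights.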
Deep Item-based Collaborative Filtering for Top-N Recommendation
Item-based Collaborative Filtering (ICF) has been widely adopted in industrial recommender systems, owing to its strength in modeling user interest and its ease of online personalization. By constructing a user's profile from the items the user has consumed, ICF recommends items that are similar to that profile. With the prevalence of machine learning in recent years, significant progress has been made for ICF by learning item similarity (or representations) from data. Nevertheless, we argue that most existing works have considered only linear and shallow relationships between items, which are insufficient to capture the complicated decision-making process of users.

In this work, we propose a more expressive ICF solution by accounting for the nonlinear and higher-order relationships among items. Going beyond modeling only the second-order interaction (e.g., similarity) between two items, we additionally consider the interactions among all pairs of interacted items using nonlinear neural networks. In this way, we can effectively model the higher-order relationships among items, capturing more complicated effects in user decision-making. For example, the model can differentiate which historical item sets in a user's profile are more important in driving the user's purchase decision on a target item. We treat this solution as a deep variant of ICF and thus term it DeepICF. To justify our proposal, we perform empirical studies on two public datasets from MovieLens and Pinterest. Extensive experiments verify the highly positive effect of modeling higher-order item interactions with nonlinear neural networks. Moreover, we demonstrate that the performance of our DeepICF method can be further improved by more fine-grained second-order interaction modeling with an attention network.

Comment: 25 pages, submitted to TOI
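A minimal sketch of a DeepICF-style scorer, assuming the structure described in the abstract: element-wise interactions between the target item's embedding and each historical item's embedding, attention-weighted pooling, and a nonlinear MLP on top. Layer sizes, the exact attention form, and all names here are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class DeepICFSketch(nn.Module):
    """Sketch of a DeepICF-style scorer (assumed architecture)."""

    def __init__(self, n_items: int, dim: int = 32):
        super().__init__()
        self.hist_emb = nn.Embedding(n_items, dim)    # historical items
        self.target_emb = nn.Embedding(n_items, dim)  # target item
        self.attn = nn.Linear(dim, 1)                 # attention over item pairs
        self.mlp = nn.Sequential(                     # nonlinear higher-order layers
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, 1),
        )

    def forward(self, history: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # history: (batch, n_hist) item ids; target: (batch,) item ids
        p = self.hist_emb(history)                         # (batch, n_hist, dim)
        q = self.target_emb(target).unsqueeze(1)           # (batch, 1, dim)
        pairwise = p * q                                   # second-order interactions
        alpha = torch.softmax(self.attn(pairwise), dim=1)  # which history items matter
        pooled = (alpha * pairwise).sum(dim=1)             # attentive pooling
        return self.mlp(pooled).squeeze(-1)                # nonlinear score

# Example usage with toy ids: score item 42 for a user who consumed 1, 2, 3.
model = DeepICFSketch(n_items=1000)
scores = model(torch.tensor([[1, 2, 3]]), torch.tensor([42]))
```

The attention weights give the fine-grained second-order modeling the abstract mentions, while the MLP over the pooled interactions supplies the nonlinear, higher-order component.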
Training Curricula for Open Domain Answer Re-Ranking
In precision-oriented tasks like answer ranking, it is more important to rank
many relevant answers highly than to retrieve all relevant answers. It follows
that a good ranking strategy would be to learn how to identify the easiest
correct answers first (i.e., assign a high ranking score to answers that have
characteristics that usually indicate relevance, and a low ranking score to
those with characteristics that do not), before incorporating more complex
logic to handle difficult cases (e.g., semantic matching or reasoning). In this
work, we apply this idea to the training of neural answer rankers using
curriculum learning. We propose several heuristics to estimate the difficulty
of a given training sample. We show that the proposed heuristics can be used to
build a training curriculum that down-weights difficult samples early in the
training process. As the training process progresses, our approach gradually
shifts to weighting all samples equally, regardless of difficulty. We present a
comprehensive evaluation of our proposed idea on three answer ranking datasets.
Results show that our approach leads to superior performance of two leading
neural ranking architectures, namely BERT and ConvKNRM, using both pointwise
and pairwise losses. When applied to a BERT-based ranker, our method yields up
to a 4% improvement in MRR and a 9% improvement in P@1 (compared to the model
trained without a curriculum). This results in models that can achieve
comparable performance to more expensive state-of-the-art techniques.

Comment: Accepted at SIGIR 2020 (long
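A minimal sketch of the curriculum weighting idea described above: early in training, a sample's loss weight reflects its estimated ease, and the weight is interpolated toward uniform so that all samples eventually count equally. The linear schedule and the difficulty scale are assumptions for illustration, not the paper's exact heuristics.

```python
def curriculum_weight(difficulty: float, step: int, end_step: int) -> float:
    """difficulty in [0, 1] (1 = hardest); returns the per-sample loss weight.
    At step 0 the weight equals the sample's ease; after end_step it is 1."""
    ease = 1.0 - difficulty
    progress = min(step / end_step, 1.0)  # 0 at start, 1 once curriculum ends
    return (1.0 - progress) * ease + progress * 1.0

# Example: a hard sample (difficulty 0.9) is nearly ignored at step 0
# but fully weighted by the end of the curriculum.
for step in (0, 500, 1000):
    print(step, curriculum_weight(0.9, step, end_step=1000))

# In a training loop the weight would scale the per-sample loss, e.g.:
#   loss = (weights * per_sample_losses).mean()
```

The difficulty estimate itself would come from one of the paper's heuristics (e.g., how strongly a sample exhibits features that usually indicate relevance); the schedule above only controls how quickly the ranker graduates from easy samples to the full training distribution.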