Sensitive and Scalable Online Evaluation with Theoretical Guarantees
Multileaved comparison methods generalize interleaved comparison methods to
provide a scalable approach for comparing ranking systems based on regular user
interactions. Such methods enable the increasingly rapid research and
development of search engines. However, existing multileaved comparison methods
that provide reliable outcomes do so by degrading the user experience during
evaluation. Conversely, current multileaved comparison methods that maintain
the user experience cannot guarantee correctness. Our contribution is two-fold.
First, we propose a theoretical framework for systematically comparing
multileaved comparison methods using the notions of considerateness, which
concerns maintaining the user experience, and fidelity, which concerns reliable
correct outcomes. Second, we introduce a novel multileaved comparison method,
Pairwise Preference Multileaving (PPM), that performs comparisons based on
document-pair preferences, and prove that it is considerate and has fidelity.
We show empirically that, compared to previous multileaved comparison methods,
PPM is more sensitive to user preferences and scalable with the number of
rankers being compared.
Comment: CIKM 2017, Proceedings of the 2017 ACM on Conference on Information
and Knowledge Management
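
To make the idea of comparing rankers through document-pair preferences concrete, here is a minimal sketch of the kind of click-based preference inference PPM builds on. It only encodes the common heuristic that a clicked document is preferred over unclicked documents displayed above it; the function name and the omission of PPM's bias-correcting pair weights are simplifications, not the paper's actual estimator.

```python
from itertools import combinations

def infer_pair_preferences(displayed, clicked):
    """Infer document-pair preferences from clicks on one displayed (multileaved) list.

    Heuristic only: a clicked document is taken to be preferred over every
    unclicked document shown at a higher position. PPM additionally weights
    each pair so that the aggregated outcome is unbiased, which is omitted here.
    """
    preferences = []
    for upper, lower in combinations(range(len(displayed)), 2):
        if displayed[lower] in clicked and displayed[upper] not in clicked:
            preferences.append((displayed[lower], displayed[upper]))
    return preferences

# Documents from several rankers multileaved into a single result list.
print(infer_pair_preferences(["d1", "d2", "d3", "d4"], clicked={"d3"}))
# -> [('d3', 'd1'), ('d3', 'd2')]
```
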
Differentiable Unbiased Online Learning to Rank
Online Learning to Rank (OLTR) methods optimize rankers based on user
interactions. State-of-the-art OLTR methods are built specifically for linear
models. Their approaches do not extend well to non-linear models such as neural
networks. We introduce an entirely novel approach to OLTR that constructs a
weighted differentiable pairwise loss after each interaction: Pairwise
Differentiable Gradient Descent (PDGD). PDGD breaks away from the traditional
approach that relies on interleaving or multileaving and extensive sampling of
models to estimate gradients. Instead, its gradient is based on inferring
preferences between document pairs from user clicks and can optimize any
differentiable model. We prove that the gradient of PDGD is unbiased w.r.t.
user document pair preferences. Our experiments on the largest publicly
available Learning to Rank (LTR) datasets show considerable and significant
improvements under all levels of interaction noise. PDGD outperforms existing
OLTR methods in terms of both learning speed and final convergence.
Furthermore, unlike previous OLTR methods, PDGD also allows for non-linear
models to be optimized effectively. Our results show that using a neural
network leads to even better performance at convergence than a linear model. In
summary, PDGD is an efficient and unbiased OLTR approach that provides a better
user experience than previously possible.
Comment: Conference on Information and Knowledge Management 201
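
As a rough illustration of how a differentiable pairwise loss can be built from a single interaction, here is a sketch of a PDGD-style update for a linear scoring model. The skip-above click heuristic and the constant pair weight are simplifying assumptions; PDGD's actual estimator weights each inferred pair to keep the gradient unbiased and applies to any differentiable model.

```python
import numpy as np

def pdgd_style_update(w, X, ranking, clicked_positions, lr=0.1):
    """One simplified PDGD-style step for a linear scorer score(d) = X[d] @ w.

    Preferences are inferred from clicks (clicked document > unclicked documents
    shown above it), and each pair contributes the gradient of a pairwise
    softmax preference probability. The per-pair debiasing weight used by PDGD
    is replaced by 1.0 here, so this is an illustration, not the real estimator.
    """
    scores = X @ w
    grad = np.zeros_like(w)
    for pos_i, doc_i in enumerate(ranking):
        if pos_i not in clicked_positions:
            continue
        for pos_j, doc_j in enumerate(ranking[:pos_i]):
            if pos_j in clicked_positions:
                continue
            # Gradient of log P(doc_i > doc_j) for a pairwise softmax preference model.
            p_lose = np.exp(scores[doc_j]) / (np.exp(scores[doc_i]) + np.exp(scores[doc_j]))
            grad += p_lose * (X[doc_i] - X[doc_j])
    return w + lr * grad

# Toy usage: 4 documents with 3 features, the document at rank 2 was clicked.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
w = np.zeros(3)
w = pdgd_style_update(w, X, ranking=[0, 1, 2, 3], clicked_positions={2})
print(w)
```
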
Optimizing Ranking Models in an Online Setting
Online Learning to Rank (OLTR) methods optimize ranking models by directly
interacting with users, which allows them to be very efficient and responsive.
All OLTR methods introduced during the past decade have extended the
original OLTR method: Dueling Bandit Gradient Descent (DBGD). Recently, a
fundamentally different approach was introduced with the Pairwise
Differentiable Gradient Descent (PDGD) algorithm. To date, the only comparisons
of the two approaches are limited to simulations with cascading click models
and low levels of noise. The main outcome so far is that PDGD converges at
higher levels of performance and learns considerably faster than DBGD-based
methods. However, the PDGD algorithm assumes cascading user behavior,
potentially giving it an unfair advantage. Furthermore, the robustness of both
methods to high levels of noise has not been investigated. Therefore, it is
unclear whether the reported advantages of PDGD over DBGD generalize to
different experimental conditions. In this paper, we investigate whether the
previous conclusions about the PDGD and DBGD comparison generalize from ideal
to worst-case circumstances. We do so in two ways. First, we compare the
theoretical properties of PDGD and DBGD, by taking a critical look at
previously proven properties in the context of ranking. Second, we estimate an
upper and lower bound on the performance of methods by simulating both ideal
user behavior and extremely difficult behavior, i.e., almost-random
non-cascading user models. Our findings show that the theoretical bounds of
DBGD do not apply to any common ranking model and, furthermore, that the
performance of DBGD is substantially worse than that of PDGD in both ideal and
worst-case circumstances. These results reproduce previously published findings
about the relative performance of PDGD vs. DBGD and generalize them to
extremely noisy and non-cascading circumstances.
Comment: European Conference on Information Retrieval (ECIR) 201
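
For readers unfamiliar with the baseline, here is a minimal sketch of the DBGD loop that PDGD is compared against. The callable `evaluate_candidate` stands in for running an interleaved comparison on a live query and interpreting the user's clicks; the unit-sphere sampling and update rule follow the standard DBGD description, with all practical details omitted.

```python
import numpy as np

def dbgd_step(w, evaluate_candidate, delta=1.0, alpha=0.01, rng=None):
    """One Dueling Bandit Gradient Descent step (simplified sketch).

    A candidate ranker is obtained by perturbing the current weights with a
    random unit vector. If the candidate wins the (interleaved) comparison,
    the current ranker takes a small step towards it; otherwise it stays put.
    """
    rng = rng or np.random.default_rng()
    u = rng.normal(size=w.shape)
    u /= np.linalg.norm(u)
    candidate = w + delta * u
    if evaluate_candidate(w, candidate):  # True if the candidate wins the comparison
        w = w + alpha * u
    return w

# Toy usage with a synthetic "comparison" that prefers weights close to a target.
target = np.array([1.0, -2.0, 0.5])
wins = lambda w, cand: np.linalg.norm(cand - target) < np.linalg.norm(w - target)
w = np.zeros(3)
for _ in range(1000):
    w = dbgd_step(w, wins)
print(w)
```
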
A Probabilistic Model for the Cold-Start Problem in Rating Prediction using Click Data
One of the most efficient methods in collaborative filtering is matrix
factorization, which finds latent vector representations of users and items
based on users' ratings of items. However, matrix factorization based
algorithms suffer from the cold-start problem: they cannot find latent vectors
for items for which no previous ratings are available. This paper utilizes
click data, which can be collected in abundance, to address the cold-start
problem. We propose a probabilistic item embedding model that learns item
representations from click data, and a model named EMB-MF, that connects it
with a probabilistic matrix factorization for rating prediction. The
experiments on three real-world datasets demonstrate that the proposed model
not only is effective in recommending items with no previous ratings but also
outperforms competing methods, especially when the data is very sparse.
Comment: ICONIP 201
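
The following sketch illustrates the two-stage idea in a deliberately simplified form: item vectors are first derived from click co-occurrence (here via a truncated SVD rather than the paper's probabilistic embedding model), and those vectors then anchor the item factors of a rating matrix factorization so that never-rated items still receive a usable representation. Function names and the regularized coupling are illustrative assumptions, not the paper's EMB-MF formulation.

```python
import numpy as np

def item_embeddings_from_clicks(click_sessions, n_items, dim=8):
    """Derive item vectors from click co-occurrence (a stand-in for the paper's
    probabilistic item-embedding model): count how often two items appear in
    the same click session and factorize the log-scaled co-occurrence matrix."""
    C = np.zeros((n_items, n_items))
    for session in click_sessions:
        for i in session:
            for j in session:
                if i != j:
                    C[i, j] += 1.0
    U, S, _ = np.linalg.svd(np.log1p(C), full_matrices=False)
    return U[:, :dim] * np.sqrt(S[:dim])

def rating_mf_with_item_anchor(R, observed, item_emb, lam=0.1, lr=0.01, epochs=200):
    """Rating matrix factorization whose item factors are pulled towards the
    click-based embeddings. Cold-start items (no observed ratings) therefore
    keep an informative representation instead of collapsing to zero."""
    n_users, _ = R.shape
    dim = item_emb.shape[1]
    rng = np.random.default_rng(0)
    P = 0.1 * rng.normal(size=(n_users, dim))
    Q = item_emb.copy()
    for _ in range(epochs):
        E = observed * (R - P @ Q.T)            # error on observed ratings only
        P += lr * (E @ Q - lam * P)
        Q += lr * (E.T @ P - lam * (Q - item_emb))
    return P, Q
```

In the paper the coupling is probabilistic (the click-based embedding acts through a shared model for the item factors) rather than this simple regularization towards fixed vectors.
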
Dynamic Poisson Factorization
Models for recommender systems use latent factors to explain the preferences
and behaviors of users with respect to a set of items (e.g., movies, books,
academic papers). Typically, the latent factors are assumed to be static and,
given these factors, the observed preferences and behaviors of users are
assumed to be generated without order. These assumptions limit the explorative
and predictive capabilities of such models, since users' interests and item
popularity may evolve over time. To address this, we propose dPF, a dynamic
matrix factorization model based on the recent Poisson factorization model for
recommendations. dPF models the time evolving latent factors with a Kalman
filter and the actions with Poisson distributions. We derive a scalable
variational inference algorithm to infer the latent factors. Finally, we
demonstrate dPF on 10 years of user click data from arXiv.org, one of the
largest repositories of scientific papers and a formidable source of information
about the behavior of scientists. Empirically, we show performance improvements
over both static and recently proposed dynamic recommendation models. We
also provide a thorough exploration of the inferred posteriors over the latent
variables.
Comment: RecSys 201
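
A small generative sketch of the kind of model dPF describes may help: latent user and item factors drift over time as a random walk (the Kalman-filter dynamics), and the counts observed in each time slice are Poisson. The softplus link and the constants below are illustrative choices, and the variational inference procedure the paper derives is not reproduced.

```python
import numpy as np

def simulate_dynamic_pf(n_users=5, n_items=8, n_steps=4, dim=3, drift=0.1, seed=0):
    """Generate click counts from a dynamic-factorization-style model:
    user and item factors follow a Gaussian random walk across time steps and
    the observed counts in each step are Poisson with rate softplus(u . v)."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_users, dim))
    V = rng.normal(size=(n_items, dim))
    counts = []
    for _ in range(n_steps):
        rate = np.logaddexp(0.0, U @ V.T)           # softplus keeps rates positive
        counts.append(rng.poisson(rate))
        U = U + drift * rng.normal(size=U.shape)    # latent factors evolve over time
        V = V + drift * rng.normal(size=V.shape)
    return counts

slices = simulate_dynamic_pf()
print(slices[0].shape, len(slices))   # (5, 8) 4
```
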
Balancing Speed and Quality in Online Learning to Rank for Information Retrieval
In Online Learning to Rank (OLTR) the aim is to find an optimal ranking model
by interacting with users. When learning from user behavior, systems must
interact with users while simultaneously learning from those interactions.
Unlike other Learning to Rank (LTR) settings, existing research in this field
has been limited to linear models. This is due to the speed-quality tradeoff
that arises when selecting models: complex models are more expressive and can
find the best rankings but need more user interactions to do so, a requirement
that risks frustrating users during training. Conversely, simpler models can be
optimized on fewer interactions and thus provide a better user experience, but
they will converge towards suboptimal rankings. This tradeoff creates a
deadlock, since novel models will not be able to improve either the user
experience or the final convergence point, without sacrificing the other. Our
contribution is twofold. First, we introduce a fast OLTR model called Sim-MGD
that addresses the speed aspect of the speed-quality tradeoff. Sim-MGD ranks
documents based on similarities with reference documents. It converges rapidly
and, hence, gives a better user experience but it does not converge towards the
optimal rankings. Second, we contribute Cascading Multileave Gradient Descent
(C-MGD) for OLTR that directly addresses the speed-quality tradeoff by using a
cascade that combines the best of both worlds: fast learning and
high-quality final convergence. C-MGD can provide the better user experience of
Sim-MGD while maintaining the same convergence as the state-of-the-art MGD
model. This opens the door for future work to design new models for OLTR
without having to deal with the speed-quality tradeoff.
Comment: CIKM 2017, Proceedings of the 2017 ACM on Conference on Information
and Knowledge Management
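
The cascade itself is mostly control flow; the sketch below shows the hand-over logic under the assumption that the concrete learners, the knowledge transfer, and the convergence test are supplied as callables (Sim-MGD, MGD, and the paper's convergence criterion would be plugged in there). Every name here is a placeholder, not the paper's interface.

```python
def cascading_oltr(interactions, fast_model, full_model,
                   fast_update, full_update, hand_over, has_converged):
    """Run OLTR as a cascade of two learners (the C-MGD idea, simplified):
    learn with the fast but limited model until it is judged converged,
    transfer what it learned to the expressive model, then keep learning
    with the expressive model for the remaining interactions."""
    switched = False
    for interaction in interactions:
        if not switched:
            fast_model = fast_update(fast_model, interaction)
            if has_converged(fast_model):
                full_model = hand_over(full_model, fast_model)
                switched = True
        else:
            full_model = full_update(full_model, interaction)
    return full_model
```
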
Diverse personalized recommendations with uncertainty from implicit preference data with the Bayesian Mallows Model
Clicking data, which exists in abundance and contains objective user
preference information, is widely used to produce personalized recommendations
in web-based applications. Current popular recommendation algorithms, typically
based on matrix factorizations, often have high accuracy and achieve good
clickthrough rates. However, diversity of the recommended items, which can
greatly enhance user experiences, is often overlooked. Moreover, most
algorithms do not produce interpretable uncertainty quantifications of the
recommendations. In this work, we propose the Bayesian Mallows for Clicking
Data (BMCD) method, which augments clicking data into compatible full ranking
vectors by enforcing all the clicked items to be top-ranked. User preferences
are learned using a Mallows ranking model. Bayesian inference leads to
interpretable uncertainties of each individual recommendation, and we also
propose a method to make personalized recommendations based on such
uncertainties. With a simulation study and a real-life data example, we
demonstrate that, compared to state-of-the-art matrix factorization, BMCD makes
personalized recommendations with similar accuracy while achieving a much higher
level of diversity and producing interpretable and actionable uncertainty
estimates.
Comment: 27 pages
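
To illustrate the augmentation step only, the sketch below completes a user's click set into one full ranking that respects the constraint that clicked items outrank unclicked ones. BMCD treats the unknown orderings as latent and performs Bayesian inference with a Mallows model over them; drawing a single compatible ranking uniformly at random, as done here, is a simplification for illustration.

```python
import numpy as np

def augment_clicks_to_ranking(clicked, not_clicked, rng=None):
    """Return one full ranking compatible with the BMCD constraint that every
    clicked item is ranked above every unclicked item. The orderings within
    the two groups are unknown, so here they are drawn uniformly at random."""
    rng = rng or np.random.default_rng()
    top = [str(x) for x in rng.permutation(sorted(clicked))]
    bottom = [str(x) for x in rng.permutation(sorted(not_clicked))]
    return top + bottom

print(augment_clicks_to_ranking(clicked={"a", "c"}, not_clicked={"b", "d", "e"}))
```
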
Recurrent Poisson Factorization for Temporal Recommendation
Poisson factorization is a probabilistic model of users and items for
recommendation systems, where the so-called implicit consumer data is modeled
by a factorized Poisson distribution. There are many variants of Poisson
factorization methods that show state-of-the-art performance on real-world
recommendation tasks. However, most of them do not explicitly take into account
the temporal behavior and the recurrent activities of users, which are essential
to recommend the right item to the right user at the right time. In this paper,
we introduce the Recurrent Poisson Factorization (RPF) framework, which generalizes
the classical PF methods by utilizing a Poisson process for modeling the
implicit feedback. RPF treats time as a natural constituent of the model and
brings to the table a rich family of time-sensitive factorization models. To
elaborate, we instantiate several variants of RPF that are capable of handling
dynamic user preferences and item specification (DRPF), modeling the
social-aspect of product adoption (SRPF), and capturing the consumption
heterogeneity among users and items (HRPF). We also develop a variational
algorithm for approximate posterior inference that scales up to massive data
sets. Furthermore, we demonstrate RPF's superior performance over many
state-of-the-art methods on a synthetic dataset and on large-scale real-world
datasets of music streaming logs and user-item interactions in M-Commerce
platforms.
Comment: Submitted to KDD 2017 | Halifax, Nova Scotia - Canada - sigkdd, Codes
are available at https://github.com/AHosseini/RP
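
As a final illustration of treating time as part of the model, the sketch below writes down a toy intensity function in the spirit of RPF: a static Poisson-factorization rate (for example, an inner product of user and item factors) plus a self-exciting term that rises after each past consumption and decays exponentially. The particular parameterization is an assumption for illustration; the RPF variants in the paper place priors on these quantities and are fitted with variational inference.

```python
import numpy as np

def rpf_style_intensity(t, past_event_times, base_rate, excitation=0.5, decay=1.0):
    """Toy intensity of a user consuming an item at time t: a static base rate
    (as in classical Poisson factorization) plus an exponentially decaying
    bump after each of that user's past consumptions of the item, capturing
    recurrent, time-sensitive behaviour."""
    past = np.asarray(past_event_times, dtype=float)
    earlier = past[past < t]
    return base_rate + excitation * np.sum(np.exp(-decay * (t - earlier)))

# A user who consumed the item at t = 1.0 and t = 2.5; intensity shortly after.
print(rpf_style_intensity(3.0, [1.0, 2.5], base_rate=0.2))
```
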