Exploiting past users’ interests and predictions in an active learning method for dealing with cold start in recommender systems
This paper focuses on the new-user cold-start issue in recommender systems. New users who do not receive pertinent recommendations may abandon the system. To cope with this issue, we use active learning techniques: these methods engage new users by presenting a questionnaire that aims to elicit their preferences over a set of items. We propose an active learning technique that exploits past users' interests and past users' predictions in order to identify the best questions to ask. Our technique achieves better performance in terms of RMSE, learning users' preferences with fewer questions. Experiments were carried out on a small, public dataset to demonstrate the applicability of the approach to handling cold-start issues
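The question-selection idea above can be sketched in a few lines. The popularity-times-variance score used here is a common active-learning heuristic standing in for the paper's criterion, and the rating matrix is a toy illustration:

```python
import numpy as np

def select_questions(ratings, n_questions=3):
    """Pick items to ask a new user about, scored by how informative
    past users' ratings are: popular items (many ratings) with high
    rating variance discriminate best between taste groups.
    `ratings` is a (users x items) array with np.nan for missing entries.
    """
    observed = ~np.isnan(ratings)
    popularity = observed.sum(axis=0)          # number of ratings per item
    # variance over observed ratings only (0 when an item has < 2 ratings)
    variance = np.array([
        np.var(col[obs]) if obs.sum() > 1 else 0.0
        for col, obs in zip(ratings.T, observed.T)
    ])
    score = np.log1p(popularity) * variance    # popularity x variance
    return np.argsort(-score)[:n_questions]    # most informative items

# toy matrix: 4 past users x 5 items (illustrative numbers only)
R = np.array([
    [5.0, 1.0, np.nan, 4.0, np.nan],
    [1.0, 5.0, 3.0,    4.0, np.nan],
    [5.0, 1.0, np.nan, 4.0, 2.0   ],
    [1.0, 5.0, 3.0,    4.0, np.nan],
])
print(select_questions(R, n_questions=2))
```

Items 0 and 1 split past users into two clear taste groups, so asking about them reveals the most about a newcomer; item 3 is popular but everyone agrees on it, so it carries little information.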
Gradient-based Optimization for Bayesian Preference Elicitation
Effective techniques for eliciting user preferences have taken on added
importance as recommender systems (RSs) become increasingly interactive and
conversational. A common and conceptually appealing Bayesian criterion for
selecting queries is expected value of information (EVOI). Unfortunately, it is
computationally prohibitive to construct queries with maximum EVOI in RSs with
large item spaces. We tackle this issue by introducing a continuous formulation
of EVOI as a differentiable network that can be optimized using gradient
methods available in modern machine learning (ML) computational frameworks
(e.g., TensorFlow, PyTorch). We exploit this to develop a novel, scalable Monte
Carlo method for EVOI optimization, which is more scalable for large item
spaces than methods requiring explicit enumeration of items. While we emphasize
the use of this approach for pairwise (or k-wise) comparisons of items, we also
demonstrate how our method can be adapted to queries involving subsets of item
attributes or "partial items," which are often more cognitively manageable for
users. Experiments show that our gradient-based EVOI technique achieves
state-of-the-art performance across several domains while scaling to large item
spaces.
Comment: To appear in the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20).
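The core idea of a continuous, differentiable EVOI can be sketched in plain numpy. Everything below is an illustrative stand-in for the paper's construction: the logistic response relaxation, the soft posterior reweighting, and the finite-difference gradients (which autodiff in TensorFlow/PyTorch replaces) are assumptions for the sketch, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo posterior over user utility vectors (S samples x D dims)
# and item embeddings (N items x D). Toy sizes and random data.
S, D, N = 500, 3, 20
W = rng.normal(size=(S, D))          # utility samples w ~ posterior
X = rng.normal(size=(N, D))          # item embeddings

def soft_evoi(a, b, tau=1.0):
    """Smoothed EVOI of the pairwise query 'a vs b', where a and b are
    *continuous* points in embedding space, so the criterion is
    differentiable in them. The hard user response is relaxed to a
    logistic preference probability."""
    d = (W @ a - W @ b) / tau
    p_a = 0.5 * (1.0 + np.tanh(d / 2.0))        # stable sigmoid(d)
    U = W @ X.T                                  # per-sample item utilities
    prior_best = np.mean(U, axis=0).max()
    # posterior-mean item utilities after each response (soft reweighting)
    post_a = (p_a[:, None] * U).sum(0) / (p_a.sum() + 1e-12)
    post_b = ((1 - p_a)[:, None] * U).sum(0) / ((1 - p_a).sum() + 1e-12)
    pa = p_a.mean()
    return pa * post_a.max() + (1 - pa) * post_b.max() - prior_best

def optimize_query(steps=30, lr=0.5, eps=1e-4):
    """Gradient ascent on soft_evoi over the query (a, b), using central
    finite differences as a stand-in for autodiff."""
    a, b = rng.normal(size=D), rng.normal(size=D)
    for _ in range(steps):
        g_a, g_b = np.zeros(D), np.zeros(D)
        for i in range(D):
            e = np.zeros(D); e[i] = eps
            g_a[i] = (soft_evoi(a + e, b) - soft_evoi(a - e, b)) / (2 * eps)
            g_b[i] = (soft_evoi(a, b + e) - soft_evoi(a, b - e)) / (2 * eps)
        a, b = a + lr * g_a, b + lr * g_b
    return a, b

a, b = optimize_query()
print(soft_evoi(a, b))
```

The key property the sketch preserves is that the smoothed EVOI is never negative (asking can only help in expectation), and optimizing it never requires enumerating all N items as query candidates.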
Modeling Recommender Ecosystems: Research Challenges at the Intersection of Mechanism Design, Reinforcement Learning and Generative Models
Modern recommender systems lie at the heart of complex ecosystems that couple
the behavior of users, content providers, advertisers, and other actors.
Despite this, the focus of the majority of recommender research -- and most
practical recommenders of any import -- is on the local, myopic optimization of
the recommendations made to individual users. This comes at a significant cost
to the long-term utility that recommenders could generate for their users. We
argue that explicitly modeling the incentives and behaviors of all actors in
the system -- and the interactions among them induced by the recommender's
policy -- is strictly necessary if one is to maximize the value the system
brings to these actors and improve overall ecosystem "health". Doing so
requires: optimization over long horizons using techniques such as
reinforcement learning; making inevitable tradeoffs in the utility that can be
generated for different actors using the methods of social choice; reducing
information asymmetry, while accounting for incentives and strategic behavior,
using the tools of mechanism design; better modeling of both user and
item-provider behaviors by incorporating notions from behavioral economics and
psychology; and exploiting recent advances in generative and foundation models
to make these mechanisms interpretable and actionable. We propose a conceptual
framework that encompasses these elements, and articulate a number of research
challenges that emerge at the intersection of these different disciplines
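One of the social-choice tools the abstract alludes to can be made concrete with a tiny example. Nash social welfare is one standard aggregator for trading off utility across actors; the policies and utility numbers below are purely illustrative:

```python
import numpy as np

def nash_welfare(utilities):
    """Nash social welfare, computed as the sum of log utilities
    (equivalent to maximizing the product). Compared with summing
    utilities, it penalizes policies that starve any one actor."""
    u = np.asarray(utilities, dtype=float)
    assert (u > 0).all(), "Nash welfare requires strictly positive utilities"
    return float(np.sum(np.log(u)))

# Two candidate recommender policies, each yielding utilities for
# (user, content provider A, content provider B). Illustrative numbers.
policy_myopic   = [9.0, 0.5, 0.5]   # best for the user, starves providers
policy_balanced = [7.0, 2.0, 2.0]   # slightly worse for the user

print(nash_welfare(policy_myopic) < nash_welfare(policy_balanced))  # True
```

Under plain utility sums the myopic policy would tie (10 vs. 11 here, close either way), but Nash welfare strongly prefers the balanced policy, illustrating why ecosystem health can diverge from myopic per-user optimization.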
Interactive collaborative filtering
In this paper, we study collaborative filtering (CF) in an interactive setting, in which a recommender system continuously recommends items to individual users and receives interactive feedback. Whilst users enjoy sequential recommendations, the recommendation predictions are constantly refined using up-to-date feedback on the recommended items. Bringing the interactive mechanism back to the CF process is fundamental because the ultimate goal for a recommender system is about the discovery of interesting items for individual users and yet users' personal preferences and contexts evolve over time during the interactions with the system. This requires us not to distinguish between the stages of collecting information to construct the user profile and making recommendations, but to seamlessly integrate these stages together during the interactive process, with the goal of maximizing the overall recommendation accuracy throughout the interactions. This mechanism naturally addresses the cold-start problem as any user can immediately receive sequential recommendations without providing ratings beforehand. We formulate the interactive CF with the probabilistic matrix factorization (PMF) framework, and leverage several exploitation-exploration algorithms to select items, including the empirical Thompson sampling and upper confidence bound based algorithms. We conduct our experiment on cold-start users as well as warm-start users with drifting taste. Results show that the proposed methods have significant improvements over several strong baselines for the MovieLens, EachMovie and Netflix datasets. Copyright 2013 ACM
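The Thompson-sampling selection step described above can be sketched as follows. The Gaussian user-factor posterior, fixed item factors, and all sizes are illustrative assumptions; in the full interactive loop the posterior would also be updated from each observed rating, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

def thompson_recommend(mu_u, cov_u, V, excluded):
    """One interactive-CF step: sample a user factor vector from its
    (approximate) PMF posterior, then recommend the unseen item with the
    highest sampled score. Exploration comes from the posterior draw."""
    u = rng.multivariate_normal(mu_u, cov_u)   # Thompson draw
    scores = V @ u                              # predicted ratings
    scores[list(excluded)] = -np.inf            # never repeat an item
    return int(np.argmax(scores))

D, N = 2, 6
V = rng.normal(size=(N, D))            # item factors (toy, assumed learned)
mu, cov = np.zeros(D), np.eye(D)       # cold-start prior: no ratings yet
seen = set()
for _ in range(3):                     # three sequential recommendations
    item = thompson_recommend(mu, cov, V, seen)
    seen.add(item)
    # (full loop: observe the user's rating here and update mu, cov)
print(sorted(seen))
```

Because each recommendation uses a fresh posterior sample rather than the posterior mean, a cold-start user with a wide prior gets naturally exploratory recommendations, and the exploration shrinks as feedback tightens the posterior.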
Active learning and search on low-rank matrices
Collaborative prediction is a powerful technique, useful in domains from recommender systems to guiding the scientific discovery process. Low-rank matrix factorization is one of the most powerful tools for collaborative prediction. This work presents a general approach for active collaborative prediction with the Probabilistic Matrix Factorization model. Using variational approximations or Markov chain Monte Carlo sampling to estimate the posterior distribution over models, we can choose query points to maximize our understanding of the model, to best predict unknown elements of the data matrix, or to find as many "positive" data points as possible. We evaluate our methods on simulated data, and also show their applicability to movie ratings prediction and the discovery of drug-target interactions
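The query-selection step can be sketched with posterior samples of the low-rank model. Picking the cell of maximum predictive variance is one of the simplest instances of the "maximize our understanding" objective; the sampled factors and sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def pick_query(samples, observed):
    """Choose the unobserved matrix cell whose predicted value varies
    most across posterior samples of the low-rank model, i.e. the entry
    the model is least certain about. `samples` holds S reconstructed
    matrices (S x n x m), e.g. from MCMC over PMF factors."""
    var = samples.var(axis=0)          # per-cell predictive variance
    var[observed] = -np.inf            # never re-query known entries
    return np.unravel_index(np.argmax(var), var.shape)

# Toy posterior: 200 sampled rank-2 reconstructions of a 4 x 5 matrix.
S, n, m, k = 200, 4, 5, 2
U = rng.normal(size=(S, n, k))         # sampled user/row factors
Vt = rng.normal(size=(S, k, m))        # sampled item/column factors
samples = U @ Vt                       # S reconstructed matrices
observed = np.zeros((n, m), dtype=bool)
observed[0, 0] = True                  # pretend one entry is already known

row, col = pick_query(samples, observed)
print((int(row), int(col)))
```

Swapping the variance score for, say, the per-cell probability of exceeding a threshold would give the paper's other objective of hunting for "positive" data points with the same selection loop.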