Toward a Robust Diversity-Based Model to Detect Changes of Context
Being able to automatically and quickly understand the user context during a
session is a key challenge for recommender systems. As a first step toward
achieving that goal, we propose a model that observes in real time the
diversity brought by each item relative to a short sequence of consultations,
corresponding to the recent user history. Our model has a complexity in
constant time, and is generic since it can apply to any type of items within an
online service (e.g. profiles, products, music tracks) and any application
domain (e-commerce, social network, music streaming), as long as we have
partial item descriptions. The observation of the diversity level over time
allows us to detect implicit changes. In the long term, we plan to characterize
the context, i.e. to find common features among a contiguous sub-sequence of
items between two changes of context determined by our model. This will allow
us to make context-aware and privacy-preserving recommendations, and to explain
them to users. As this is ongoing research, the first step here consists in
studying the robustness of our model while detecting changes of context. In
order to do so, we use a music corpus of 100 users and more than 210,000
consultations (number of songs played in the global history). We validate the
relevancy of our detections by finding connections between changes of context
and events, such as ends of session. Of course, these events are a subset of
the possible changes of context, since there might be several contexts within a
session. We degraded the quality of our corpus in several ways, so as to test
the performance of our model when confronted with sparsity and different types
of items. The results show that our model is robust and constitutes a promising
approach.

Comment: 27th IEEE International Conference on Tools with Artificial
Intelligence (ICTAI 2015), Nov 2015, Vietri sul Mare, Italy
Evaluating the effectiveness of explanations for recommender systems: Methodological issues and empirical studies on the impact of personalization
Peer-reviewed postprint
Leveraging Large Language Models in Conversational Recommender Systems
A Conversational Recommender System (CRS) offers increased transparency and
control to users by enabling them to engage with the system through a real-time
multi-turn dialogue. Recently, Large Language Models (LLMs) have exhibited an
unprecedented ability to converse naturally and incorporate world knowledge and
common-sense reasoning into language understanding, unlocking the potential of
this paradigm. However, effectively leveraging LLMs within a CRS introduces new
technical challenges, including properly understanding and controlling a
complex conversation and retrieving from external sources of information. These
issues are exacerbated by a large, evolving item corpus and a lack of
conversational data for training. In this paper, we provide a roadmap for
building an end-to-end large-scale CRS using LLMs. In particular, we propose
new implementations for user preference understanding, flexible dialogue
management and explainable recommendations as part of an integrated
architecture powered by LLMs. For improved personalization, we describe how an
LLM can consume interpretable natural language user profiles and use them to
modulate session-level context. To overcome conversational data limitations in
the absence of an existing production CRS, we propose techniques for building a
controllable LLM-based user simulator to generate synthetic conversations. As a
proof of concept, we introduce RecLLM, a large-scale CRS for YouTube videos
built on LaMDA, and demonstrate its fluency and diverse functionality through
some illustrative example conversations.
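One concrete piece of the roadmap above, the controllable LLM-based user simulator for generating synthetic conversations, can be sketched generically. The `llm` and `crs_respond` callables below are hypothetical stand-ins (the abstract does not disclose RecLLM's internals), and the persona-in-prompt control mechanism is an assumption about one plausible way to make the simulator controllable.

```python
def simulate_conversation(llm, crs_respond, persona, max_turns=4):
    """Hypothetical sketch of a controllable user simulator.

    llm:          callable prompt -> text, standing in for any LLM endpoint
    crs_respond:  callable transcript -> text, standing in for the CRS
    persona:      a natural-language description conditioning the simulated
                  user, which is what makes the simulator "controllable"

    Returns a transcript of (role, utterance) pairs usable as synthetic
    training data when no production conversational logs exist.
    """
    transcript = []
    for _ in range(max_turns):
        user_prompt = (
            f"You are a user of a video recommender. Persona: {persona}\n"
            f"Conversation so far: {transcript}\n"
            "Reply with your next message only."
        )
        user_msg = llm(user_prompt)
        transcript.append(("user", user_msg))
        transcript.append(("system", crs_respond(transcript)))
    return transcript
```

Varying the persona string then yields diverse synthetic dialogues over the same item corpus, which is the data-scarcity workaround the abstract describes.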
A conversational collaborative filtering approach to recommendation
Recent work has shown the value of treating recommendation as a conversation between user and system, which conversational recommenders have done by allowing feedback like "not as expensive as this" on recommendations. This allows a more natural alternative to content-based information access. Our research focuses on creating a viable conversational methodology for collaborative-filtering recommendation which can apply to any kind of information, especially visual. Since collaborative filtering does not have an intrinsic understanding of the items it suggests, i.e. it does not understand the content, it has no obvious mechanism for conversation. Here we develop a means by which a recommender driven purely by collaborative filtering can sustain a conversation with a user, and in our evaluation we show that it enables finding multimedia items that the user wants without requiring domain knowledge.
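The abstract does not specify how critique feedback is turned into a conversation without content understanding, but one standard way to realize the idea is to work entirely in the collaborative-filtering latent space: critiques move a query vector toward or away from the critiqued item's embedding, so no item content is ever interpreted. The sketch below is an illustrative assumption of that kind, not the paper's actual mechanism; `recommend` and `critique` are hypothetical names.

```python
import numpy as np


def recommend(query, item_vectors, k=3):
    """Rank items by cosine similarity of their CF latent vectors to the
    current query vector; return the indices of the top-k items."""
    norms = np.linalg.norm(item_vectors, axis=1) * np.linalg.norm(query)
    scores = item_vectors @ query / np.maximum(norms, 1e-12)
    return np.argsort(-scores)[:k]


def critique(query, item_vec, direction, step=0.5):
    """Update the query from a user critique on one shown item.

    direction="more" moves the query toward the item's latent vector,
    direction="less" moves it away. Only embeddings are used, so the
    conversation needs no understanding of item content.
    """
    sign = 1.0 if direction == "more" else -1.0
    return query + sign * step * (item_vec - query)
```

Each conversational turn is then a recommend/critique cycle, which is why the approach applies to any item type, including purely visual items with no usable metadata.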
- …