Towards Question-based Recommender Systems
Conversational and question-based recommender systems have gained increasing
attention in recent years, enabling users to converse with the system and
better control recommendations. Nevertheless, research in the field is still
limited compared to traditional recommender systems. In this work, we propose
a novel question-based recommendation method, Qrec, that assists users in
finding items interactively by answering automatically constructed and algorithmically
chosen questions. Previous conversational recommender systems ask users to
express their preferences over items or item facets. Our model, instead, asks
users to express their preferences over descriptive item features. The model is
first trained offline by a novel matrix factorization algorithm, and then
iteratively updates the user and item latent factors online by a closed-form
solution based on the user answers. Meanwhile, our model infers the underlying
user belief and preferences over items to learn an optimal question-asking
strategy by using Generalized Binary Search, so as to ask a sequence of
questions to the user. Our experimental results demonstrate that our proposed
matrix factorization model outperforms the traditional Probabilistic Matrix
Factorization model. Further, our proposed Qrec model substantially outperforms
state-of-the-art baselines, and it is also effective for cold-start user and
item recommendations.
Comment: accepted by SIGIR 202
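The question-selection idea the abstract names, Generalized Binary Search over a belief distribution on items, can be sketched in a few lines. The feature sets and belief values below are invented for illustration, and Qrec's actual closed-form latent-factor updates are not reproduced here:

```python
# Generalized Binary Search for question selection: ask about the feature
# whose yes/no answer splits the current probability mass over candidate
# items as evenly as possible, halving uncertainty per question.

def choose_question(belief, has_feature):
    """belief: {item: prob}; has_feature: {feature: set of items with it}."""
    best_q, best_gap = None, float("inf")
    for feature, items in has_feature.items():
        yes_mass = sum(p for item, p in belief.items() if item in items)
        gap = abs(2 * yes_mass - 1.0)  # 0.0 means a perfect half split
        if gap < best_gap:
            best_q, best_gap = feature, gap
    return best_q

def update_belief(belief, has_feature, feature, answer_yes):
    """Drop items inconsistent with the user's answer, then renormalize."""
    kept = {i: p for i, p in belief.items()
            if (i in has_feature[feature]) == answer_yes}
    total = sum(kept.values())
    return {i: p / total for i, p in kept.items()}
```

With a belief of {a: 0.5, b: 0.3, c: 0.2} and a feature held only by item a, asking about that feature splits the mass exactly in half, so GBS prefers it over a feature held by both a and b.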
Towards Explainable Conversational Recommender Systems
Explanations in conventional recommender systems have demonstrated benefits
in helping the user understand the rationality of the recommendations and
improving the system's efficiency, transparency, and trustworthiness. In the
conversational environment, multiple contextualized explanations need to be
generated, which poses further challenges for explanations. To better measure
explainability in conversational recommender systems (CRS), we propose ten
evaluation perspectives based on concepts from conventional recommender systems
together with the characteristics of CRS. We assess five existing CRS benchmark
datasets using these metrics and observe the necessity of improving the
explanation quality of CRS. To achieve this, we apply manual and automatic
approaches to extend these dialogues and construct a new CRS dataset, namely
Explainable Recommendation Dialogues (E-ReDial). It includes 756 dialogues with
over 2,000 high-quality rewritten explanations. We compare two baseline
approaches to perform explanation generation based on E-ReDial. Experimental
results suggest that models trained on E-ReDial can significantly improve
explainability while introducing knowledge into the models can further improve
the performance. GPT-3 in the in-context learning setting can generate more
realistic and diverse movie descriptions. In contrast, T5 trained on E-ReDial
can better generate clear reasons for recommendations based on user
preferences. E-ReDial is available at https://github.com/Superbooming/E-ReDial.
Improving Conversational Recommendation Systems via Counterfactual Data Simulation
Conversational recommender systems (CRSs) aim to provide recommendation
services via natural language conversations. Although a number of approaches
have been proposed for developing capable CRSs, they typically require large
amounts of training data. Since recommendation-oriented dialogue datasets are
difficult to annotate, existing CRS approaches often suffer from insufficient
training due to data scarcity.
To address this issue, in this paper, we propose a CounterFactual data
simulation approach for CRS, named CFCRS, to alleviate the issue of data
scarcity in CRSs. Our approach is developed based on the framework of
counterfactual data augmentation, which gradually rewrites the user preference
in a real dialogue without interfering with the overall conversation flow. To
develop our approach, we characterize user preference and
organize the conversation flow by the entities involved in the dialogue, and
design a multi-stage recommendation dialogue simulator based on a conversation
flow language model. Under the guidance of the learned user preference and
dialogue schema, the flow language model can produce reasonable, coherent
conversation flows, which can be further realized into complete dialogues.
Based on the simulator, we intervene on the representations of the entities
that target users have interacted with, and design an adversarial training
method with a curriculum schedule that can gradually optimize the data
augmentation strategy. Extensive experiments show that our approach can
consistently boost the performance of several competitive CRSs, and outperform
other data augmentation methods, especially when the training data is limited.
Our code is publicly available at https://github.com/RUCAIBox/CFCRS.
Comment: Accepted by KDD 2023
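The entity-level counterfactual idea can be caricatured as follows. This is a much-simplified sketch: it swaps entities in a conversation flow for similar ones while preserving the flow's order, whereas CFCRS itself learns the intervention adversarially with a flow language model and a curriculum schedule, none of which is reproduced here; the flow and entity names are hypothetical:

```python
import random

def counterfactual_flow(flow, similar_entities, n_swaps=1, seed=0):
    """Build a counterfactual conversation flow: swap a few mentioned
    entities for similar ones, keeping the rest of the flow intact.

    flow: list of dialogue elements (acts and entity mentions).
    similar_entities: {entity: [plausible substitutes]}.
    """
    rng = random.Random(seed)
    new_flow = list(flow)  # copy so the real dialogue is untouched
    swappable = [i for i, e in enumerate(new_flow) if e in similar_entities]
    for i in rng.sample(swappable, min(n_swaps, len(swappable))):
        new_flow[i] = rng.choice(similar_entities[new_flow[i]])
    return new_flow
```

Each counterfactual flow would then be realized into a full dialogue by a generator, yielding extra training conversations that differ from the original only in the intervened entities.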
A Conversation is Worth A Thousand Recommendations: A Survey of Holistic Conversational Recommender Systems
Conversational recommender systems (CRS) generate recommendations through an
interactive process. However, not all CRS approaches use human conversations as
their source of interaction data; the majority of prior CRS work simulates
interactions by exchanging entity-level information. As a result, claims of
prior CRS work do not generalise to real-world settings where conversations
take unexpected turns, or where conversational and intent understanding is not
perfect. To tackle this challenge, the research community has started to
examine holistic CRS, which are trained using conversational data collected
from real-world scenarios. Despite their emergence, such holistic approaches
are under-explored.
We present a comprehensive survey of holistic CRS methods by summarizing the
literature in a structured manner. Our survey recognises holistic CRS
approaches as having three components: 1) a backbone language model, the
optional use of 2) external knowledge, and/or 3) external guidance. We also
give a detailed analysis of CRS datasets and evaluation methods in real
application scenarios. We offer our insight as to the current challenges of
holistic CRS and possible future trends.
Comment: Accepted by 5th KaRS Workshop @ ACM RecSys 2023, 8 pages
Evaluating Conversational Recommender Systems: A Landscape of Research
Conversational recommender systems aim to interactively support online users
in their information search and decision-making processes in an intuitive way.
With the latest advances in voice-controlled devices, natural language
processing, and AI in general, such systems received increased attention in
recent years. Technically, conversational recommenders are usually complex
multi-component applications and often consist of multiple machine learning
models and a natural language user interface. Evaluating such a complex system
in a holistic way can therefore be challenging, as it requires (i) the
assessment of the quality of the different learning components, and (ii) the
quality perception of the system as a whole by users. Thus, a mixed methods
approach is often required, which may combine objective (computational) and
subjective (perception-oriented) evaluation techniques. In this paper, we
review common evaluation approaches for conversational recommender systems,
identify possible limitations, and outline future directions towards more
holistic evaluation practices.
Conversational Recommender System and Large Language Model Are Made for Each Other in E-commerce Pre-sales Dialogue
E-commerce pre-sales dialogue aims to understand and elicit user needs and
preferences for the items they are seeking so as to provide appropriate
recommendations. Conversational recommender systems (CRSs) learn user
representation and provide accurate recommendations based on dialogue context,
but rely on external knowledge. Large language models (LLMs) generate responses
that mimic pre-sales dialogues after fine-tuning, but lack domain-specific
knowledge for accurate recommendations. Intuitively, the strengths of LLM and
CRS in E-commerce pre-sales dialogues are complementary, yet no previous work
has explored this. This paper investigates the effectiveness of combining LLM
and CRS in E-commerce pre-sales dialogues, proposing two collaboration methods:
CRS assisting LLM and LLM assisting CRS. We conduct extensive experiments on a
real-world dataset of E-commerce pre-sales dialogues, and analyze the impact of
the two collaborative approaches with two CRSs and two LLMs on four tasks of
E-commerce pre-sales dialogue. We find that collaboration between CRS and LLM
can be very effective in some cases.
Comment: EMNLP 2023 Findings
Evaluating Conversational Recommender Systems via User Simulation
Conversational information access is an emerging research area. Currently,
human evaluation is used for end-to-end system evaluation, which is both time-
and resource-intensive at scale, and thus becomes a bottleneck of
progress. As an alternative, we propose automated evaluation by means of
simulating users. Our user simulator aims to generate responses that a real
human would give by considering both individual preferences and the general
flow of interaction with the system. We evaluate our simulation approach on an
item recommendation task by comparing three existing conversational recommender
systems. We show that preference modeling and task-specific interaction models
both contribute to more realistic simulations, and can help achieve high
correlation between automatic evaluation measures and manual human assessments.
Comment: Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '20), 202
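The simulator's two ingredients, a preference model and a task-specific interaction model, can be caricatured in a few lines. The dialogue states, preference ratings, and response templates below are invented for illustration and do not reflect the paper's actual simulator:

```python
def simulate_turn(state, item, preferences):
    """Generate the response a simulated user would give, combining an
    interaction model (what to do in each dialogue state) with a
    preference model (how the user rates the offered item, 1-5)."""
    if state == "elicitation":
        liked = [i for i, r in preferences.items() if r >= 4]
        if liked:
            return f"I like things similar to {liked[0]}."
        return "Surprise me."
    if state == "recommendation":
        rating = preferences.get(item, 0)
        if rating >= 4:
            return "Sounds great, I'll take it!"
        return "Not for me, anything else?"
    return "Goodbye."
```

Running many such simulated dialogues against each candidate recommender gives automatic success and turn-count measures that can be correlated with human judgments.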