5,039 research outputs found
User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations
Modern recommender systems utilize users' historical behaviors to generate
personalized recommendations. However, these systems often lack user
controllability, leading to diminished user satisfaction and trust in the
systems. Acknowledging the recent advancements in explainable recommender
systems that enhance users' understanding of recommendation mechanisms, we
propose leveraging these advancements to improve user controllability. In this
paper, we present a user-controllable recommender system that seamlessly
integrates explainability and controllability within a unified framework. By providing both retrospective and prospective explanations through counterfactual reasoning, the system lets users customize their control over it by interacting with these explanations.
Furthermore, we introduce and assess two attributes of controllability in
recommendation systems: the complexity of controllability and the accuracy of
controllability. Experimental evaluations on MovieLens and Yelp datasets
substantiate the effectiveness of our proposed framework. Additionally, our
experiments demonstrate that offering users control options can potentially enhance future recommendation accuracy. Source code and data are available at https://github.com/chrisjtan/ucr.
Comment: Accepted for presentation at the 26th European Conference on Artificial Intelligence (ECAI 2023)
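As a concrete illustration of the retrospective idea, here is a minimal, hypothetical sketch (not the authors' implementation): given any scoring function, search for a small subset of the user's history whose removal would flip the top recommendation. The `score` function and all item names are placeholders.

```python
from itertools import combinations

def score(history, item):
    # Toy stand-in for a real recommender's scoring function:
    # counts history items sharing the candidate's genre prefix.
    return sum(h.split("_")[0] == item.split("_")[0] for h in history)

def top_item(history, candidates):
    return max(candidates, key=lambda i: score(history, i))

def counterfactual_explanation(history, candidates, max_size=2):
    """Find a small subset of past interactions whose removal flips
    the top recommendation ("had you not watched these, ...")."""
    original = top_item(history, candidates)
    for k in range(1, max_size + 1):
        for removed in combinations(history, k):
            reduced = [h for h in history if h not in removed]
            if reduced and top_item(reduced, candidates) != original:
                return removed, original
    return None, original

history = ["action_1", "action_2", "comedy_1"]
candidates = ["action_9", "comedy_9"]
removed, rec = counterfactual_explanation(history, candidates)
print(f"{rec} was recommended; without {removed} it would change.")
```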
The effects of controllability and explainability in a social recommender system
In recent years, researchers in the field of recommender systems have explored a range of advanced interfaces to improve user interactions with recommender systems. Some of the major research ideas explored in this new area include the explainability and controllability of recommendations. Controllability enables end users to participate in the recommendation process by providing various kinds of input. Explainability focuses on making the recommendation process and the reasons behind specific recommendations more clear to users. While each of these approaches contributes to making traditional "black-box" recommendation more attractive and acceptable to end users, little is known about how these approaches work together. In this paper, we investigate the effects of adding user control and visual explanations in the specific context of an interactive hybrid social recommender system. We present Relevance Tuner+, a hybrid recommender system that allows users to control the fusion of multiple recommender sources while also offering explanations of both the fusion process and each of the source recommendations. We also report the results of a controlled study (N = 50) that explores the impact of controllability and explainability in this context.
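To make the fusion idea concrete, below is a hedged sketch of user-controllable score fusion (not the Relevance Tuner+ code): each source scores candidates, and user-set slider weights determine the mix. All source names, items, and values are illustrative.

```python
def fuse(source_scores, weights):
    """Weighted sum of per-source relevance scores for each candidate."""
    fused = {}
    for source, scores in source_scores.items():
        w = weights.get(source, 0.0)
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + w * s
    return fused

sources = {
    "text_similarity": {"paper_a": 0.9, "paper_b": 0.4},
    "coauthorship":    {"paper_a": 0.2, "paper_b": 0.8},
}
# The user moves the sliders to trust the social source more:
user_weights = {"text_similarity": 0.3, "coauthorship": 0.7}
ranked = sorted(fuse(sources, user_weights).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)  # paper_b (0.68) now ranks above paper_a (0.41)
```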
User Feedback in Controllable and Explainable Social Recommender Systems: a Linguistic Analysis
Controllable and explainable intelligent user interfaces have been used to provide transparent recommendations. Many researchers have explored interfaces that support user control and provide explanations of the recommendation process and models. To extend these works to real-world decision-making scenarios, we need to further understand users' mental models of the enhanced system components. In this paper, we make a step in this direction by investigating the free-form feedback left by users of social recommender systems to specify their reasons for selecting prompted social recommendations. With a user study involving 50 subjects (N=50), we present the linguistic changes in using controllable and explainable interfaces for a social information-seeking task. Based on our findings, we discuss design implications for controllable and explainable recommender systems.
Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System
Explainable recommender systems (RS) have traditionally followed a one-size-fits-all approach, delivering explanations at the same level of detail to each user, without considering their individual needs and goals. Further,
explanations in RS have so far been presented mostly in a static and
non-interactive manner. To fill these research gaps, we aim in this paper to
adopt a user-centered, interactive explanation model that provides explanations
with different levels of detail and empowers users to interact with, control,
and personalize the explanations based on their needs and preferences. We
followed a user-centered approach to design interactive explanations with three
levels of detail (basic, intermediate, and advanced) and implemented them in
the transparent Recommendation and Interest Modeling Application (RIMA). We
conducted a qualitative user study (N=14) to investigate the impact of providing interactive explanations with varying levels of detail on users' perception of the explainable RS. Our study showed qualitative evidence that
fostering interaction and giving users control in deciding which explanation
they would like to see can meet the demands of users with different needs,
preferences, and goals, and consequently can have positive effects on different
crucial aspects in explainable recommendation, including transparency, trust,
satisfaction, and user experience.
Comment: 23 pages
LIMEADE: A General Framework for Explanation-Based Human Tuning of Opaque Machine Learners
Research in human-centered AI has shown the benefits of systems that can
explain their predictions. Methods that allow humans to tune a model in
response to the explanations are similarly useful. While both capabilities are
well-developed for transparent learning models (e.g., linear models and GA2Ms),
and recent techniques (e.g., LIME and SHAP) can generate explanations for
opaque models, no method for tuning opaque models in response to explanations
has been user-tested to date. This paper introduces LIMEADE, a general
framework for tuning an arbitrary machine learning model based on an
explanation of the model's prediction. We demonstrate the generality of our
approach with two case studies. First, we successfully utilize LIMEADE for the
human tuning of opaque image classifiers. Second, we apply our framework to a
neural recommender system for scientific papers on a public website and report
on a user study showing that our framework leads to significantly higher
perceived user control, trust, and satisfaction. Analyzing 300 user logs from our publicly deployed website, we uncover a tradeoff between canonical greedy explanations and diverse explanations that better facilitate human tuning.
Comment: 16 pages, 7 figures
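The abstract does not spell out the tuning rule, so the following is one plausible, hypothetical illustration of explanation-based tuning rather than the authors' LIMEADE procedure: feedback on a term surfaced in an explanation is folded back into the model as a pseudo-labeled training example, and the model is refit. All data and names are invented.

```python
from scipy.sparse import vstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["deep learning for vision", "bayesian inference methods",
        "graph neural networks", "variational inference tricks"]
labels = [1, 0, 1, 0]  # 1 = "relevant to this user"

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

def tune_with_feedback(term, liked, X, y):
    # Build a pseudo-document from the endorsed/rejected term and refit.
    pseudo = vec.transform([term])
    X_new = vstack([X, pseudo])
    y_new = list(y) + [1 if liked else 0]
    return LogisticRegression().fit(X_new, y_new), X_new, y_new

# The user sees "inference" highlighted in an explanation and endorses it:
clf, X, labels = tune_with_feedback("inference", liked=True, X=X, y=labels)
```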
Evaluating the effectiveness of explanations for recommender systems: Methodological issues and empirical studies on the impact of personalization
Peer reviewed postprint
Exploring explanations for matrix factorization recommender systems (Position Paper)
In this paper we address the problem of finding explanations for collaborative filtering algorithms that use matrix factorization methods. We look for explanations that increase the transparency of the system. To do so, we propose two measures. First, we show a model that describes the contribution of each previous rating given by a user to the generated recommendation. Second, we measure the influence of changing each previous rating of a user on the outcome of the recommender system. We show that under the assumption that there are many more users in the system than there are items, we can efficiently generate each type of explanation by using linear approximations of the recommender system's behavior for each user, and computing partial derivatives of predicted ratings with respect to each user's provided ratings.
http://scholarworks.boisestate.edu/fatrec/2017/1/7/
Published version
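The second measure admits a compact sketch. Assuming the item factor matrix V is fixed (reasonable when users far outnumber items), the user's latent vector is a ridge-regression function of their ratings, so predicted ratings are linear in the provided ratings and the partial derivatives come out in closed form. This is a minimal illustration under that assumption, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_items = 4, 6
V = rng.normal(size=(n_items, k))       # fixed item factors
rated = [0, 2, 3]                       # indices of items the user rated
r = np.array([4.0, 2.5, 5.0])           # the user's provided ratings
lam = 0.1                               # ridge regularization strength

V_r = V[rated]                          # factors of the rated items
# u solves min_u ||V_r u - r||^2 + lam ||u||^2, i.e. u = A @ r with:
A = np.linalg.solve(V_r.T @ V_r + lam * np.eye(k), V_r.T)
u = A @ r                               # user's latent vector

target = 5                              # an unrated item
pred = V[target] @ u                    # predicted rating for it
# Influence of each provided rating: d(pred)/d(r_j) = (V[target] @ A)_j
influence = V[target] @ A
print(pred, influence)
```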
Recommender systems and their ethical challenges
This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders (as opposed to just the receivers of a recommendation) in assessing the ethical impacts of a recommender system.
- …