
    Make it personal: a social explanation system applied to group recommendations

    Recommender systems help users identify which items, from a variety of choices, best match their needs and preferences. In this context, explanations act as complementary information that can help users better comprehend the system’s output and support goals such as trust, confidence in decision-making, or utility. In this paper we propose a Personalized Social Individual Explanation approach (PSIE). Unlike other expert systems, PSIE novelly includes both explanations of the system’s group recommendation and explanations of the group’s social reality, with the goal of inducing a positive reaction that leads to a better perception of the received group recommendations. Among other challenges, we uncover a special need to focus on “tactful” explanations when addressing users’ personal relationships within a group, and on personalized, reassuring explanations that encourage users to accept the presented recommendations. Moreover, the resulting intelligent system significantly increases users’ intent (likelihood) to follow the recommendations, users’ satisfaction, and the system’s efficiency and trustworthiness.
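
    As a concrete illustration of the kind of output PSIE targets, the following Python sketch composes a personalized, “tactful” explanation for one group member. The predicted scores, thresholds, and wording templates are invented for illustration and are not the paper’s implementation.

    # Minimal sketch (not the authors' PSIE system) of a personalized,
    # "tactful" explanation for a group recommendation. Scores and wording
    # templates below are illustrative assumptions.

    def explain_for_member(member, item, predicted):
        """Return a reassuring explanation tailored to one group member."""
        own = predicted[member]
        group_avg = sum(predicted.values()) / len(predicted)
        supporters = [m for m, s in predicted.items() if m != member and s >= 4.0]

        if own >= group_avg:
            personal = f"'{item}' fits your own taste (predicted {own:.1f}/5)"
        else:
            # Tactful wording: stress the group benefit, not the weaker personal fit.
            personal = f"'{item}' is a fair compromise for the whole group"

        social = (f"{' and '.join(supporters)} also enjoy similar items"
                  if supporters else "it balances everyone's preferences")
        return f"{personal}, and {social}."

    predicted = {"Ana": 4.5, "Bea": 3.2, "Carl": 4.1}
    print(explain_for_member("Bea", "The Grand Budapest Hotel", predicted))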

    Scalable and interpretable product recommendations via overlapping co-clustering

    We consider the problem of generating interpretable recommendations by identifying overlapping co-clusters of clients and products, based only on positive or implicit feedback. Our approach is applicable to very large datasets because it exhibits almost linear complexity in the input examples and the number of co-clusters. We show, both on real industrial data and on publicly available datasets, that the recommendation accuracy of our algorithm is competitive with that of state-of-the-art matrix factorization techniques. In addition, our technique has the advantage of offering recommendations that are textually and visually interpretable. Finally, we examine how to implement our technique efficiently on Graphics Processing Units (GPUs). Comment: In IEEE International Conference on Data Engineering (ICDE) 201
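
    To illustrate why overlapping co-clusters make recommendations interpretable, here is a minimal Python sketch that recommends unpurchased products from the co-clusters a client belongs to and attaches a cluster-based explanation. The hand-written co-clusters and toy data are assumptions; the paper’s actual learning algorithm, which fits the co-clusters to implicit feedback at near-linear cost, is not reproduced.

    # Minimal sketch: recommendations and explanations derived from given
    # overlapping co-clusters. The co-clusters are hand-written assumptions,
    # not learned as in the paper.

    purchases = {              # implicit positive feedback: client -> products
        "c1": {"laptop", "mouse"},
        "c2": {"laptop", "monitor"},
        "c3": {"guitar", "amp"},
    }

    co_clusters = [            # each co-cluster groups clients with shared products
        {"clients": {"c1", "c2"}, "products": {"laptop", "mouse", "monitor"}},
        {"clients": {"c3"},       "products": {"guitar", "amp", "tuner"}},
    ]

    def recommend(client):
        """Recommend unpurchased products from every co-cluster the client
        belongs to, with a textual, co-cluster-based explanation."""
        results = []
        for k, cc in enumerate(co_clusters):
            if client in cc["clients"]:
                for product in cc["products"] - purchases[client]:
                    peers = sorted(cc["clients"] - {client})
                    results.append((product,
                                    f"co-cluster {k}: clients {peers} bought similar items"))
        return results

    for item, why in recommend("c1"):
        print(item, "->", why)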

    Presenting tiered recommendations in social activity streams

    Modern social networking sites offer node-centralized streams that display recent updates from the other nodes in one's network. While such social activity streams are convenient features that help alleviate information overload, they can often become overwhelming themselves, especially high-throughput streams like Twitter’s home timelines. In these cases, recommender systems can help guide users toward the content they will find most important or interesting. However, current efforts to manipulate social activity streams involve hiding updates predicted to be less engaging or reordering them to place new or more engaging content first. These modifications can lead to decreased trust in the system and an inability to consume each update in its chronological context. Instead, I propose a three-tiered approach to displaying recommendations in social activity streams that hides nothing and preserves original context by highlighting updates predicted to be most important and de-emphasizing updates predicted to be least important. This presentation design allows users to easily consume different levels of recommended items in chronological order, persuades users to agree with its positive recommendations more than 25% more often than the baseline, and shows no significant loss of perceived accuracy or trust compared with a filtered stream, possibly even performing better when extreme recommendation errors are intentionally introduced. Numerous directions for future research follow from this work that can shed light on how users react to different recommendation presentation designs and explain how study of an emphasis-based approach might help improve the state of the art.
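
    A minimal sketch of the three-tier presentation idea follows: updates stay in chronological order and nothing is hidden, but each one is tagged for emphasis, normal display, or de-emphasis according to a predicted importance score. The score field and thresholds are illustrative assumptions rather than the thesis’s trained recommender.

    # Minimal sketch: assign each update to one of three display tiers while
    # preserving chronological order. Thresholds and scores are assumptions.

    HIGHLIGHT, NORMAL, DEEMPHASIZE = "highlight", "normal", "de-emphasize"

    def tier(score, high=0.7, low=0.3):
        if score >= high:
            return HIGHLIGHT
        if score <= low:
            return DEEMPHASIZE
        return NORMAL

    stream = [  # already in chronological order; nothing is hidden or reordered
        {"time": 1, "text": "conference keynote live now", "score": 0.9},
        {"time": 2, "text": "what I had for lunch",        "score": 0.1},
        {"time": 3, "text": "new paper from my group",     "score": 0.8},
    ]

    for update in stream:
        print(f"[{tier(update['score']):>12}] {update['text']}")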

    Explainability in Music Recommender Systems

    The most common way to listen to recorded music nowadays is via streaming platforms, which provide access to tens of millions of tracks. To assist users in effectively browsing these large catalogs, the integration of Music Recommender Systems (MRSs) has become essential. Current real-world MRSs are often quite complex and optimized for recommendation accuracy. They combine several building blocks based on collaborative filtering and content-based recommendation. This complexity can hinder the ability to explain recommendations to end users, which is particularly important for recommendations perceived as unexpected or inappropriate. While pure recommendation performance often correlates with user satisfaction, explainability has a positive impact on other factors such as trust and forgiveness, which are ultimately essential to maintain user loyalty. In this article, we discuss how explainability can be addressed in the context of MRSs. We provide perspectives on how explainability could improve music recommendation algorithms and enhance user experience. First, we review common dimensions and goals of recommender explainability and, more generally, of eXplainable Artificial Intelligence (XAI), and elaborate on the extent to which these apply -- or need to be adapted -- to the specific characteristics of music consumption and recommendation. Then, we show how explainability components can be integrated within an MRS and in what form explanations can be provided. Since the evaluation of explanation quality is decoupled from pure accuracy-based evaluation criteria, we also discuss requirements and strategies for evaluating explanations of music recommendations. Finally, we describe the current challenges for introducing explainability within a large-scale industrial music recommender system and provide research perspectives. Comment: To appear in AI Magazine, Special Topic on Recommender Systems 202

    A general framework for intelligent recommender systems

    In this paper, we propose a general framework for an intelligent recommender system that extends the concept of a knowledge-based recommender system. The intelligent recommender system exploits knowledge, learns, discovers new information, and infers preferences and criticisms, among other things. To this end, the framework of an intelligent recommender system is defined by the following components: a knowledge representation paradigm, learning methods, and reasoning mechanisms. Additionally, it has five knowledge models covering the different aspects that can be considered during a recommendation: users, items, domain, context, and criticisms. The combination of these components exploits the knowledge, updates it, and performs inference. In this work, we implement one intelligent recommender system based on this framework, using Fuzzy Cognitive Maps (FCMs). Finally, we evaluate the intelligent recommender system with specialized criteria linked to the utilization of knowledge, in order to assess the versatility and performance of the framework.
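
    Since the implemented system relies on Fuzzy Cognitive Maps, a small sketch of an FCM inference step may help: concepts hold activation values, and activations propagate through a signed weight matrix until they settle. The concepts, weights, and the particular update rule (with self-feedback and a sigmoid squashing function) are toy assumptions, not the paper’s knowledge models.

    # Minimal sketch of Fuzzy Cognitive Map (FCM) inference. Concepts, weights,
    # and the specific update rule are illustrative assumptions.
    import numpy as np

    concepts = ["likes_action", "likes_long_movies", "recommend_item"]

    W = np.array([            # W[i, j]: causal influence of concept i on concept j
        [0.0, 0.0, 0.8],      # liking action films pushes the recommendation up
        [0.0, 0.0, -0.4],     # disliking long movies pushes it down (negative edge)
        [0.0, 0.0, 0.0],
    ])

    def fcm_infer(state, W, steps=10):
        """Iterate A(t+1) = sigmoid(A(t) + A(t) @ W) until activations settle."""
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
        for _ in range(steps):
            state = sigmoid(state + state @ W)
        return state

    initial = np.array([0.9, 0.2, 0.0])   # user profile as concept activations
    print(dict(zip(concepts, np.round(fcm_infer(initial, W), 2))))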

    Enhancing explainability and scrutability of recommender systems

    Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm’s behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. These explanations aim to help users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Besides, in the event of receiving undesirable content, explanations could contain valuable information as to how the system’s behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:
    • We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users’ profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.
    • We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible to users because they present subsets of the user’s prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations.
    • We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability, and subsequently the recommendation models, by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.
    We evaluate all proposed models and methods with real user studies and demonstrate their benefits at achieving explainability and scrutability in recommender systems.
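
    For the PRINCE contribution, a counterfactual explanation is a smallest subset of the user’s own past actions whose removal changes the recommendation. The brute-force Python sketch below illustrates that definition only; the toy scorer stands in for personalized PageRank, and the thesis uses a polynomial-time algorithm rather than subset enumeration.

    # Minimal counterfactual sketch in the spirit of PRINCE. The toy scorer and
    # brute-force search are illustrative assumptions, not the thesis method.
    from itertools import combinations

    def top_recommendation(actions):
        """Toy stand-in for a graph-based recommender: score candidate items by
        how many of the user's actions support them."""
        support = {
            "item_A": {"liked_1", "liked_2"},
            "item_B": {"liked_3"},
        }
        scores = {item: len(acts & actions) for item, acts in support.items()}
        return max(scores, key=scores.get)

    def counterfactual_explanation(actions):
        """Return the original recommendation and a smallest action subset whose
        removal changes it (None if no such subset exists)."""
        original = top_recommendation(actions)
        for size in range(1, len(actions) + 1):          # smallest subsets first
            for removed in combinations(sorted(actions), size):
                if top_recommendation(actions - set(removed)) != original:
                    return original, removed
        return original, None

    actions = {"liked_1", "liked_2", "liked_3"}
    rec, because = counterfactual_explanation(actions)
    print(f"'{rec}' was recommended; removing {because} would change it.")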
