
    Topic-Centric Explanations for News Recommendation

    News recommender systems (NRS) have been widely applied on online news websites to help users find relevant articles based on their interests. Recent methods have demonstrated considerable success in terms of recommendation performance. However, the lack of explanations for these recommendations can lead to user mistrust and reluctance to accept them. To address this issue, we propose a new explainable news model that constructs a topic-aware explainable recommendation approach, one that can both accurately identify relevant articles and explain why they have been recommended, using information from the associated topics. Additionally, our model incorporates two coherence metrics for assessing topic quality, providing a measure of the interpretability of these explanations. The results of our experiments on the MIND dataset indicate that the proposed explainable NRS outperforms several baseline systems, while also producing more interpretable topics than a classical LDA topic model. Furthermore, we present a case study on a real-world example showcasing the usefulness of our NRS for generating explanations. Comment: 20 pages, submitted to a journal
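    The coherence metrics mentioned above can be made concrete with a short sketch. The snippet below computes a normalised PMI (NPMI) coherence over a topic's top words from document co-occurrence counts; the choice of NPMI, the function names, and the toy corpus are assumptions for illustration, not the paper's actual metrics.

```python
import math
from itertools import combinations

def npmi_coherence(top_words, documents, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    top_words: list of words describing one topic.
    documents: list of token lists used to estimate co-occurrence.
    Returns a value in [-1, 1]; higher suggests a more coherent topic.
    """
    n_docs = len(documents)
    doc_sets = [set(doc) for doc in documents]

    def doc_freq(*words):
        return sum(1 for d in doc_sets if all(w in d for w in words))

    scores = []
    for w1, w2 in combinations(top_words, 2):
        p1 = doc_freq(w1) / n_docs
        p2 = doc_freq(w2) / n_docs
        p12 = doc_freq(w1, w2) / n_docs
        if p12 == 0:
            scores.append(-1.0)  # words never co-occur: least coherent pair
            continue
        pmi = math.log(p12 / (p1 * p2 + eps))
        scores.append(pmi / (-math.log(p12) + eps))
    return sum(scores) / len(scores)

# Toy example: score one topic's top words against a tiny corpus.
docs = [["election", "vote", "senate"], ["vote", "senate", "bill"], ["movie", "actor"]]
print(npmi_coherence(["election", "vote", "senate"], docs))
```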

    'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions

    Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions. Comment: 14 pages, 3 figures, ACM Conference on Human Factors in Computing Systems (CHI'18), April 21-26, Montreal, Canada

    'It's Reducing a Human Being to a Percentage': Perceptions of Procedural Justice in Algorithmic Decisions

    Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to ‘meaningful information about the logic’ behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no ‘best’ approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.

    Enhancing explainability and scrutability of recommender systems

    Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm's behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Moreover, in the event of receiving undesirable content, explanations may contain valuable information as to how the system's behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:
    • We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users' profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.
    • We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible to users because they present subsets of the user's prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations; a toy sketch of this idea follows below.
    • We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability, and subsequently the recommendation models, by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.
    We evaluate all proposed models and methods with real user studies and demonstrate their benefits at achieving explainability and scrutability in recommender systems.
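    The counterfactual idea behind PRINCE (subsets of a user's prior actions under personalized PageRank) lends itself to a toy illustration. The sketch below greedily removes user-item edges until the recommendation changes; the bipartite-graph setup, function names, and the greedy strategy are assumptions made for brevity, not PRINCE's actual polynomial-time algorithm.

```python
import networkx as nx

def greedy_counterfactual(graph, user, rec_item, candidate_items, user_actions):
    """Greedily remove the user's action edges until rec_item is no longer
    the top-ranked candidate under personalized PageRank.

    graph: undirected interaction graph containing the user, their actions
    (e.g. items clicked), and candidate items. Returns the removed actions,
    read as "the actions responsible for the recommendation".
    Illustrative heuristic only, not PRINCE's exact algorithm.
    """
    g = graph.copy()
    removed = []

    def top_item(h):
        ppr = nx.pagerank(h, personalization={user: 1.0})
        return max(candidate_items, key=lambda i: ppr.get(i, 0.0))

    while top_item(g) == rec_item and len(removed) < len(user_actions):
        def score_without(action):
            h = g.copy()
            h.remove_edge(user, action)
            return nx.pagerank(h, personalization={user: 1.0}).get(rec_item, 0.0)

        # Drop the remaining action whose removal hurts rec_item's score most.
        remaining = [a for a in user_actions if a not in removed]
        worst = min(remaining, key=score_without)
        g.remove_edge(user, worst)
        removed.append(worst)
    return removed
```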

    Role of emotion in information retrieval

    The main objective of Information Retrieval (IR) systems is to satisfy searchers' needs. A great deal of research has been conducted in the past to achieve a better insight into searchers' needs and the factors that can potentially influence the success of an Information Retrieval and Seeking (IR&S) process. One of these factors is searchers' emotion. Previous research has shown that emotion plays an important role in the success of an IR&S process, whose purpose is to satisfy an information need. However, these previous studies do not give a sufficiently prominent position to emotion in IR, since they limit its role to that of a secondary factor, by assuming that a lack of knowledge (the need for information) is the primary factor (the motivation of the search). In this thesis, we propose to treat emotion as the principal factor in the system of needs of a searcher, and therefore one that ought to be considered by retrieval algorithms.
    We present a more realistic view of searchers' needs by considering theories not only from information retrieval and information science, but also from psychology, philosophy, and sociology. We report extensively on the role of emotion in every aspect of human behaviour, at both an individual and a social level. This serves not only to modify the current IR views of emotion, but more importantly to uncover social situations where emotion is the primary factor (i.e., the source of motivation) in an IR&S process. We also show that the emotion aspect of documents plays an important part in satisfying the searcher's need, in particular when emotion is indeed a primary factor. Given the above, we define three concepts, called emotion need, emotion object, and emotion relevance, and present a conceptual map that utilises these concepts in IR tasks and scenarios.
    In order to investigate the practical concepts, such as emotion object and emotion relevance, in a real-life application, we first study the possibility of extracting emotion from text, since this is the first pragmatic challenge to be solved before any IR task can be tackled. For this purpose, we developed a text-based emotion extraction system and demonstrate that it outperforms other available emotion extraction approaches. Using the developed emotion extraction system, the usefulness of the practical concepts mentioned above is studied in two scenarios: movie recommendation and news diversification.
    In the movie recommendation scenario, two collaborative filtering (CF) models were proposed. CF systems aim to recommend items to a user based on information gathered from other users with similar interests. CF techniques do not handle data sparsity well, especially in the case of the cold-start problem, where there is no past rating for an item. In order to predict the rating of an item for a given user, the first and second models rely on extensions of state-of-the-art memory-based and model-based CF systems, respectively. The features used by the models are two emotion spaces, extracted from the movie plot summary and from user reviews, and three semantic spaces, namely actor, director, and genre. Experiments with two MovieLens datasets show that the inclusion of emotion information significantly improves prediction accuracy when compared with state-of-the-art CF techniques, and also tackles data sparsity issues.
    In the news retrieval scenario, a novel way of diversifying results, i.e., diversifying based on the emotion aspect of documents, is proposed. For this purpose, two approaches are introduced to incorporate emotion features into diversification, and they are empirically tested on the TREC 678 Interactive Track collection. The results show that emotion features are capable of enhancing retrieval effectiveness. Overall, this thesis shows that emotion plays a key role in IR and that its importance needs to be considered. At a more detailed level, it illustrates the crucial part that emotion can play in:
    • searchers, both as a primary (emotion need) and a secondary (influential) factor in an IR&S process;
    • enhancing the representation of a document using emotion features (emotion object); and finally,
    • improving the effectiveness of IR systems at satisfying searchers' needs (emotion relevance).
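    The emotion-based diversification described above can be sketched with a simple greedy re-ranker. The MMR-style trade-off between retrieval score and emotion-space redundancy below is an assumption chosen for illustration; the thesis's two actual diversification approaches may be formulated differently, and the emotion dimensions and function names are hypothetical.

```python
import numpy as np

def emotion_diversified_ranking(relevance, emotion_vecs, k, trade_off=0.7):
    """Greedy re-ranking that balances retrieval score against emotion-space
    similarity to documents already selected (an MMR-style sketch).

    relevance:    shape (n_docs,), retrieval scores.
    emotion_vecs: shape (n_docs, n_emotions), e.g. anger/fear/joy/sadness
                  scores produced by a text-based emotion extractor.
    Returns the indices of the k selected documents in ranking order.
    """
    emotion_vecs = emotion_vecs / (np.linalg.norm(emotion_vecs, axis=1, keepdims=True) + 1e-12)
    selected, remaining = [], list(range(len(relevance)))
    while remaining and len(selected) < k:
        def mmr(i):
            if not selected:
                return relevance[i]
            redundancy = max(float(emotion_vecs[i] @ emotion_vecs[j]) for j in selected)
            return trade_off * relevance[i] - (1 - trade_off) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy example: documents 0 and 1 carry nearly the same emotion, document 2 differs.
rel = np.array([0.9, 0.85, 0.5])
emo = np.array([[1.0, 0.0], [0.95, 0.05], [0.0, 1.0]])
print(emotion_diversified_ranking(rel, emo, k=2))  # [0, 2]: the emotionally different doc wins
```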

    An analysis of popularity biases in recommender system evaluation and algorithms

    Unpublished doctoral thesis, read at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defence: 03-10-2019. Recommendation technologies have progressively extended their presence in everyday applications and services. Recommender systems aim to make individualized suggestions of products or options that users may find interesting or useful. Implicit in the concept of recommendation is the idea that the most satisfactory suggestions for each user are those that take their particular tastes into account, so one would expect the most effective recommendation algorithms to be the most personalized ones. However, it has recently been observed that simply recommending the most popular products is not a much worse strategy than the best and most sophisticated personalized algorithms and, moreover, that the latter tend to bias their recommendations towards majority options. It is therefore relevant to understand to what extent, and under what circumstances, popularity is a truly effective signal for recommendation, and whether or not its apparent effectiveness is due to the existence of certain biases in current offline evaluation methodologies, as everything seems to indicate. In this thesis we address this question from a fully formal point of view, identifying the factors that can determine the answer and modelling them in terms of probabilistic dependencies between random variables such as rating, discovery, and relevance. In this way, we characterize specific situations that guarantee that popularity is, or is not, effective, and we establish the conditions under which contradictions can arise between observed and true accuracy. The main conclusions refer to prototypical simplified scenarios, beyond which the formal analysis concludes that any outcome is possible. To go deeper into the general scenario without such simplifying assumptions, we study a particular case where item discovery results from interaction between users in a social network. In addition, this thesis provides a formal explanation of the popularity bias exhibited by collaborative filtering algorithms. To this end, we develop a probabilistic formulation of the k-nearest-neighbour (kNN) algorithm, which also reveals the fundamental condition that makes kNN produce personalized recommendations and differ from pure popularity.
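    As a toy illustration of the popularity question discussed above, the sketch below contrasts a most-popular baseline with a simple user-based kNN scorer on a binary interaction matrix; on the toy data both end up recommending the globally popular item. Function names, the similarity choice, and the data are assumptions, and this is not the probabilistic kNN formulation developed in the thesis.

```python
import numpy as np

def popularity_scores(ratings):
    """Non-personalized baseline: score each item by how many users rated it.

    ratings: binary user-item matrix of shape (n_users, n_items).
    """
    return ratings.sum(axis=0)

def user_knn_scores(ratings, user, k=10):
    """User-based kNN: items are scored by summing the ratings of the k users
    with the highest co-rating count with the target user. Minimal sketch only.
    """
    sims = (ratings @ ratings[user]).astype(float)  # co-rating counts
    sims[user] = -np.inf                            # exclude the target user
    neighbours = np.argsort(sims)[-k:]
    scores = ratings[neighbours].sum(axis=0).astype(float)
    scores[ratings[user] > 0] = -np.inf             # do not re-recommend seen items
    return scores

# Toy data: 5 users, 4 items; item 0 is rated by almost everyone.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]])
print(np.argmax(popularity_scores(R)))             # 0: the most popular item
print(np.argmax(user_knn_scores(R, user=4, k=2)))  # also 0: kNN leans popular here too
```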

    Exploiting the conceptual space in hybrid recommender systems: a semantic-based approach

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, October 200

    Explainable Information Retrieval: A Survey

    Explainable information retrieval is an emerging research area that aims to make information retrieval systems transparent and trustworthy. Given the increasing use of complex machine learning models in search systems, explainability is essential in building and auditing responsible information retrieval models. This survey fills a vital gap in the otherwise topically diverse literature on explainable information retrieval. It categorizes and discusses recent explainability methods developed for different application domains in information retrieval, providing a common framework and unifying perspectives. In addition, it reflects on the common concern of evaluating explanations and highlights open challenges and opportunities. Comment: 35 pages, 10 figures. Under review

    Explainability for Machine Learning Models: From Data Adaptability to User Perception

    This thesis explores the generation of local explanations for already deployed machine learning models, aiming to identify optimal conditions for producing meaningful explanations that take into account both data and user requirements. The primary goal is to develop methods for generating explanations for any model while ensuring that these explanations remain faithful to the underlying model and comprehensible to users. The thesis is divided into two parts. The first part enhances a widely used rule-based explanation method. It then introduces a novel approach for evaluating how well linear explanations can approximate a model. Additionally, it conducts a comparative experiment between two families of counterfactual explanation methods to analyze the advantages of one over the other. The second part focuses on user experiments to assess the impact of three explanation methods and two distinct representations. These experiments measure how users perceive their interaction with the model in terms of understanding and trust, depending on the explanations and representations. This research contributes to better explanation generation, with potential implications for enhancing the transparency, trustworthiness, and usability of deployed AI systems. Comment: PhD Thesis
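    The idea of testing whether a linear explanation can approximate a model locally, mentioned above, can be sketched with a generic surrogate-fidelity check: fit a linear model on perturbations around an instance and measure how well it tracks the black box. The sampling scheme, the R^2 criterion, and the toy black box are assumptions for illustration, not the thesis's actual method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def local_linear_fidelity(predict_fn, x, scale=0.1, n_samples=500, seed=0):
    """Fit a linear surrogate on Gaussian perturbations around x and return
    its R^2 against the black-box predictions. A low value suggests a linear
    explanation is a poor local approximation of the model at x.
    """
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    y = predict_fn(X)                    # black-box outputs on the neighbourhood
    surrogate = LinearRegression().fit(X, y)
    return r2_score(y, surrogate.predict(X))

# Toy black box: nearly linear around the origin, curved near x0 = 0.5.
black_box = lambda X: np.sin(3 * X[:, 0]) + X[:, 1]
print(local_linear_fidelity(black_box, np.array([0.0, 0.0])))  # close to 1
print(local_linear_fidelity(black_box, np.array([0.5, 0.0])))  # noticeably lower
```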