
    Human-Understandable Explanations of Neural Networks

    The 21st century is characterized by data streams of enormous magnitude. This has drastically increased the popularity of highly data-intensive computational models such as neural networks. Owing to their great success in pattern recognition, they have become a powerful tool for prediction, classification, and recommendation in computer science, statistics, economics, and many other disciplines. Despite this widespread use, neural networks are black-box models, i.e., they provide no readily interpretable insight into the structure of the approximated function or into how an input is transformed into the corresponding output. Recent research attempts to open these black boxes and reveal their inner workings. So far, most work has focused on explaining the decisions of a neural network at a very technical level and for an audience of computer science experts. As neural networks are increasingly used by people without a deep background in computer science, it is crucial to develop approaches that make neural networks understandable to non-experts as well. The goal is for people to understand why the neural network made certain decisions and to be able to interpret the model's output throughout. This thesis describes a framework for providing human-understandable explanations of neural networks. We characterize human-understandable explanations by seven properties, namely transparency, scrutability, trust, effectiveness, persuasiveness, efficiency, and satisfaction. In this thesis, we present explanation approaches that fulfill these properties. First, we introduce TransPer, an explanation framework for neural networks, in particular those used in product recommender systems. We define explanation measures based on input relevance to analyze the prediction quality of the neural network and to help AI practitioners improve their neural networks, thereby creating transparency and trust. In a recommender-system use case, we also consider persuasiveness, which prompts the user to buy a product, and satisfaction, which makes the user experience more pleasant. Second, to open the black box of the neural network, we define ObAlEx, a new metric for explanation quality in image classification. Using object detection approaches, explanation approaches, and ObAlEx, we quantify the focus of convolutional neural networks on the actual evidence. This gives users an effective explanation and trust that the model actually based its classification decision on the correct part of the input image. Moreover, it enables scrutability, i.e., the possibility for the user to tell the explanation system that the model focused on the wrong parts of the input image. Third, we propose FilTag, an approach for explaining convolutional neural networks by tagging their filters with keywords that identify image classes. Taken together, these tags explain the purpose of a filter. Individual image classifications can then be explained intuitively via the tags of the filters activated by the input image. These explanations increase scrutability and trust. Finally, we present FAIRnets, which aims to provide metadata about neural networks, such as architecture information and intended use. Explaining how a neural network is built makes neural networks more transparent; enabling a user to quickly decide whether a neural network is relevant for the desired use case makes neural networks more efficient. All four approaches address the question of how to provide explanations of neural networks to non-experts. Together, they represent an important step towards human-understandable AI.
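    The abstract does not spell out how ObAlEx is computed; purely as an illustration of the idea of quantifying a convolutional network's focus on the actual evidence, a metric of this kind could measure how much of an explanation heatmap's relevance mass falls inside the region an object detector marks as the true object. The function name, the formula, and the toy data below are assumptions for illustration, not the dissertation's actual definition.

    import numpy as np

    def object_aligned_explanation_score(heatmap, object_mask):
        """Hypothetical ObAlEx-style score: the fraction of explanation relevance
        that falls inside the detected object region (an assumption, not the
        dissertation's exact definition).

        heatmap     -- non-negative relevance map from an explanation method
                       (e.g. Grad-CAM), same spatial size as the input image.
        object_mask -- boolean mask from an object detector marking the pixels
                       that constitute the actual evidence.
        """
        relevance = np.clip(heatmap, 0.0, None)
        total = relevance.sum()
        if total == 0.0:
            return 0.0  # the explanation highlights nothing at all
        return float(relevance[object_mask].sum() / total)

    # Toy usage: a 4x4 heatmap whose mass lies mostly inside the object region.
    heatmap = np.array([[0.0, 0.1, 0.0, 0.0],
                        [0.0, 0.8, 0.9, 0.0],
                        [0.0, 0.7, 0.6, 0.0],
                        [0.0, 0.0, 0.0, 0.1]])
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True  # detector says the object occupies the centre 2x2 area
    print(object_aligned_explanation_score(heatmap, mask))  # ~0.94, i.e. well aligned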

    M2GRL: A Multi-task Multi-view Graph Representation Learning Framework for Web-scale Recommender Systems

    Combining graph representation learning with multi-view data (side information) for recommendation is a trend in industry. Most existing methods can be categorized as "multi-view representation fusion": they first build one graph and then integrate multi-view data into a single compact representation for each node in the graph. However, these methods raise concerns in both engineering and algorithmic respects: 1) multi-view data are abundant and informative in industry and may exceed the capacity of a single vector, and 2) inductive bias may be introduced, as multi-view data often come from different distributions. In this paper, we use a "multi-view representation alignment" approach to address this issue. In particular, we propose a multi-task multi-view graph representation learning framework (M2GRL) to learn node representations from multi-view graphs for web-scale recommender systems. M2GRL constructs one graph for each single-view data source, learns multiple separate representations from the multiple graphs, and performs alignment to model cross-view relations. M2GRL adopts a multi-task learning paradigm to learn intra-view representations and cross-view relations jointly. In addition, M2GRL applies homoscedastic uncertainty to adaptively tune the loss weights of the tasks during training. We deploy M2GRL at Taobao and train it on 57 billion examples. According to offline metrics and online A/B tests, M2GRL significantly outperforms other state-of-the-art algorithms. Further exploration of diversity recommendation at Taobao shows the effectiveness of utilizing the multiple representations produced by M2GRL, which we argue is a promising direction for various industrial recommendation tasks with different focuses. Comment: Accepted by KDD 2020 ads track as an oral paper. Code address: https://github.com/99731/M2GR
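    The abstract states that M2GRL weights its task losses via homoscedastic uncertainty but gives no formula. A minimal sketch of the commonly used formulation of that idea (learned log-variances per task, as in Kendall et al.) is shown below; the class name and the toy losses are assumptions, and this is not M2GRL's actual code.

    import torch

    class UncertaintyWeightedLoss(torch.nn.Module):
        """Combine per-task losses with learned homoscedastic uncertainty.
        A sketch of the standard weighting scheme, not M2GRL's implementation."""

        def __init__(self, num_tasks):
            super().__init__()
            # one learnable log(sigma^2) per task, trained with the model parameters
            self.log_vars = torch.nn.Parameter(torch.zeros(num_tasks))

        def forward(self, task_losses):
            total = 0.0
            for i, loss in enumerate(task_losses):
                precision = torch.exp(-self.log_vars[i])
                # each task contributes loss / (2 * sigma_i^2) + log(sigma_i)
                total = total + 0.5 * precision * loss + 0.5 * self.log_vars[i]
            return total

    # Toy usage: two intra-view losses plus one cross-view alignment loss.
    weighting = UncertaintyWeightedLoss(num_tasks=3)
    losses = [torch.tensor(1.2), torch.tensor(0.7), torch.tensor(2.5)]
    print(weighting(losses))  # single scalar objective with adaptive task weights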

    Beyond-accuracy: a review on diversity, serendipity, and fairness in recommender systems based on graph neural networks

    By providing personalized suggestions to users, recommender systems have become essential to numerous online platforms. Collaborative filtering approaches, particularly graph-based ones using Graph Neural Networks (GNNs), have demonstrated strong results in terms of recommendation accuracy. However, accuracy may not always be the most important criterion for evaluating a recommender system's performance, since beyond-accuracy aspects such as recommendation diversity, serendipity, and fairness can strongly influence user engagement and satisfaction. This review paper focuses on addressing these dimensions in GNN-based recommender systems, going beyond the conventional accuracy-centric perspective. We begin by reviewing recent developments in approaches that not only improve the accuracy-diversity trade-off but also promote serendipity and fairness in GNN-based recommender systems. We discuss the different stages of model development, including data preprocessing, graph construction, embedding initialization, propagation layers, embedding fusion, score computation, and training methodologies. Furthermore, we examine the practical difficulties encountered in ensuring diversity, serendipity, and fairness while retaining high accuracy. Finally, we discuss potential future research directions for developing more robust GNN-based recommender systems that go beyond the unidimensional perspective of focusing solely on accuracy. This review aims to provide researchers and practitioners with an in-depth understanding of the multifaceted issues that arise when designing GNN-based recommender systems, setting our work apart by offering a comprehensive exploration of beyond-accuracy dimensions.
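    The review does not commit to a single definition of diversity; one widely used beyond-accuracy measure of the kind it discusses is intra-list diversity, the average pairwise dissimilarity of the items in a recommendation list. The sketch below, with made-up item embeddings, only illustrates that kind of metric and is not taken from the paper.

    import numpy as np

    def intra_list_diversity(item_embeddings):
        """Average pairwise cosine distance within one recommendation list.
        item_embeddings -- array of shape (k, d), one row per recommended item."""
        normed = item_embeddings / np.linalg.norm(item_embeddings, axis=1, keepdims=True)
        sims = normed @ normed.T                      # pairwise cosine similarities
        k = len(item_embeddings)
        off_diag = sims.sum() - np.trace(sims)        # drop the self-similarities
        return float(1.0 - off_diag / (k * (k - 1)))  # mean pairwise cosine distance

    # Toy usage: three items, two of them nearly identical, so diversity is modest.
    items = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
    print(intra_list_diversity(items))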

    Understanding and Mitigating Multi-sided Exposure Bias in Recommender Systems

    Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research. It is especially important in multi-sided recommendation platforms, where it may be crucial to optimize utilities not just for the end user but also for other actors, such as item sellers or producers, who desire a fair representation of their items. Existing solutions do not properly address the various aspects of multi-sided fairness in recommendations, as they either take a one-sided view (i.e., improving fairness only for one side) or do not appropriately measure the fairness for each actor involved in the system. In this thesis, I first investigate the impact of unfair recommendations on the system and how they can negatively affect major actors in the system. Then, I propose solutions to tackle the unfairness of recommendations. I propose a rating transformation technique that works as a pre-processing step before building the recommendation model, to alleviate the inherent popularity bias in the input data and consequently to mitigate the exposure unfairness for items and suppliers in the recommendation lists. As a second solution, I propose a general graph-based method that works as a post-processing approach after recommendation generation, to mitigate multi-sided exposure bias in the recommendation results. For evaluation, I introduce several metrics for measuring the exposure fairness for items and suppliers, and show that these metrics better capture the fairness properties in the recommendation results. I perform extensive experiments to evaluate the effectiveness of the proposed solutions. The experiments on different publicly available datasets and comparisons with various baselines confirm the superiority of the proposed solutions in improving the exposure fairness for items and suppliers. Comment: Doctoral thesis
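    The exposure metrics themselves are not defined in the abstract; as a rough illustration, item (and, by aggregation, supplier) exposure is commonly measured by summing a rank-discounted weight over all recommendation lists in which an item appears. The helper below uses assumed names and a standard logarithmic discount and is not the thesis' exact metric.

    import math
    from collections import defaultdict

    def exposure_by_supplier(rec_lists, item_to_supplier):
        """Aggregate rank-discounted exposure per supplier over many users' lists.
        rec_lists        -- one ranked list of item ids per user.
        item_to_supplier -- dict mapping each item id to its supplier id.
        A sketch of a generic exposure measure, not the thesis' definition."""
        exposure = defaultdict(float)
        for ranked_items in rec_lists:
            for rank, item in enumerate(ranked_items, start=1):
                # logarithmic position discount: top-ranked items count more
                exposure[item_to_supplier[item]] += 1.0 / math.log2(rank + 1)
        return dict(exposure)

    # Toy usage: two users, three items from two suppliers.
    recs = [["i1", "i2", "i3"], ["i2", "i1", "i3"]]
    suppliers = {"i1": "s1", "i2": "s1", "i3": "s2"}
    print(exposure_by_supplier(recs, suppliers))  # s1 receives most of the exposure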

    Recommendation Systems: An Insight Into Current Development and Future Research Challenges

    Research on recommendation systems is swiftly producing an abundance of novel methods, constantly challenging the current state-of-the-art. Inspired by advancements in many related fields, like Natural Language Processing and Computer Vision, many hybrid approaches based on deep learning are being proposed, making solid improvements over traditional methods. On the downside, this flurry of research activity, often focused on improving over a small number of baselines, makes it hard to identify reference methods and standardized evaluation protocols. Furthermore, the traditional categorization of recommendation systems into content-based, collaborative filtering and hybrid systems lacks the informativeness it once had. With this work, we provide a gentle introduction to recommendation systems, describing the task they are designed to solve and the challenges faced in research. Building on previous work, an extension to the standard taxonomy is presented, to better reflect the latest research trends, including the diverse use of content and temporal information. To ease the approach toward the technical methodologies recently proposed in this field, we review several representative methods selected primarily from top conferences and systematically describe their goals and novelty. We formalize the main evaluation metrics adopted by researchers and identify the most commonly used benchmarks. Lastly, we discuss issues in current research practices by analyzing experimental results reported on three popular datasets.
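    The survey formalizes the main evaluation metrics used by researchers; as a small, self-contained example of the kind of metric meant, the sketch below computes NDCG@k in its standard form (this is the textbook definition, not code from any surveyed paper).

    import numpy as np

    def ndcg_at_k(ranked_relevances, k):
        """Normalized Discounted Cumulative Gain at cut-off k.
        ranked_relevances -- relevance grades of the items, in recommended order."""
        rels = np.asarray(ranked_relevances, dtype=float)[:k]
        discounts = 1.0 / np.log2(np.arange(2, len(rels) + 2))   # positions 1..k
        dcg = float((rels * discounts).sum())
        ideal = np.sort(np.asarray(ranked_relevances, dtype=float))[::-1][:k]
        idcg = float((ideal * discounts[:len(ideal)]).sum())
        return dcg / idcg if idcg > 0 else 0.0

    # Toy usage: the only relevant item is ranked third instead of first.
    print(ndcg_at_k([0, 0, 1, 0, 0], k=5))  # 0.5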

    Modeling and debiasing feedback loops in collaborative filtering recommender systems.

    Artificial Intelligence (AI)-driven recommender systems have been gaining increasing ubiquity and influence in our daily lives, especially during time spent online on the World Wide Web or smart devices. The influence of recommender systems on who and what we can find and discover, our choices, and our behavior, has thus never been more concrete. AI can now predict and anticipate, with varying degrees of accuracy, the news article we will read, the music we will listen to, the movies we will watch, the transactions we will make, the restaurants we will eat in, the online courses we will be interested in, and the people we will connect with for various ends and purposes. For all these reasons, the automated predictions and recommendations made by AI can influence and change human opinions, behavior, and decision making. When the AI predictions are biased, the influences can have unfair consequences on society, ranging from social polarization to the amplification of misinformation and hate speech. For instance, bias in recommender systems can affect the decision making and shift consumer behavior in an unfair way due to a phenomenon known as the feedback loop. The feedback loop is an inherent component of recommender systems because the latter are dynamic systems that involve continuous interactions with the users, whereby data collected to train a recommender system model is usually affected by the outputs of a previously trained model. This feedback loop is expected to affect the performance of the system. For instance, it can amplify initial bias in the data or model and can lead to other phenomena such as filter bubbles, polarization, and popularity bias. Up to now, it has been difficult to understand the dynamics of recommender system feedback loops, and equally challenging to evaluate the bias and filter bubbles emerging from recommender system models within such an iterative closed loop environment. In this dissertation, we study the feedback loop in the context of Collaborative Filtering (CF) recommender systems. CF systems comprise the leading family of recommender systems that rely mainly on mining the patterns of interaction between the users and items to train models that aim to predict future user interactions. Our research contributions target three aspects of recommendation, namely modeling, debiasing and evaluating feedback loops. Our research advances the state of the art in Fairness in Artificial Intelligence on several fronts: (1) We propose and validate a new theoretical model, based on Martingale differences, to model the recommender system feedback loop, and allow a better understanding of the dynamics of filter bubbles and user discovery. (2) We propose a Transformer-based deep learning architecture and algorithm to learn diverse representations for users and items in order to increase the diversity in the recommendations. Our evaluation experiments on real world datasets demonstrate that our transformer model recommends 14% more diverse items and improves the novelty of the recommendation by more than 20%. (3) We propose a new simulation and experimentation framework that allows studying and tracking the evolution of bias metrics in a feedback loop setting, for a variety of recommendation modeling algorithms. Our preliminary findings, using the new simulation framework, show that recommender systems are deeply affected by the feedback loop, and that without an adequate debiasing or exploration strategy, this feedback loop limits the discovery of the user and increases the disparity in exposure between items that can be recommended. To help the research and practice community in studying recommender system fairness, all the tools developed to model, debias, and evaluate recommender systems are made available to the public as open source software libraries (https://github.com/samikhenissi/TheoretUserModeling). (4) We propose a novel learnable dynamic debiasing strategy that learns an optimal rescaling parameter for the predicted rating and achieves a better trade-off between accuracy and debiasing. We focus on solving the popularity bias of the items and test our method using our proposed simulation framework and show the effectiveness of using a learnable debiasing degree to produce better results.
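    The dissertation's own simulation framework is the open source library linked above; the toy loop below is only a schematic illustration of what a closed feedback loop looks like (train, recommend, collect synthetic feedback, retrain), not the TheoretUserModeling API. All names and numbers are made up.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy closed loop: item scores start equal, each round the system recommends
    # its current top item and the simulated click feedback reinforces that item.
    num_items, rounds = 5, 50
    scores = np.ones(num_items)        # stand-in for a trained model's item scores
    exposure = np.zeros(num_items)     # how often each item gets recommended

    for _ in range(rounds):
        recommended = int(np.argmax(scores))             # greedy recommendation
        exposure[recommended] += 1
        clicked = rng.random() < 0.7                     # user accepts with fixed probability
        scores[recommended] += 1.0 if clicked else -0.1  # "retrain" on the loop's own feedback

    print(exposure)  # exposure concentrates on very few items: the feedback loop at work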

    Recent Advances of Differential Privacy in Centralized Deep Learning: A Systematic Survey

    Differential Privacy has become a widely popular method for data protection in machine learning, especially since it allows strict mathematical privacy guarantees to be formulated. This survey provides an overview of the state of the art in differentially private centralized deep learning, thorough analyses of recent advances and open problems, and a discussion of potential future developments in the field. Based on a systematic literature review, the following topics are addressed: auditing and evaluation methods for private models, improvements of privacy-utility trade-offs, protection against a broad range of threats and attacks, differentially private generative models, and emerging application domains. Comment: 35 pages, 2 figures
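    Much of the work such a survey covers builds on DP-SGD, i.e. per-example gradient clipping followed by calibrated Gaussian noise. A minimal numpy sketch of one noisy update step for a toy model is given below; the clip norm, noise multiplier, and learning rate are illustrative values, and this is not any specific library's API.

    import numpy as np

    def dp_sgd_step(weights, per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
        """One DP-SGD update: clip each example's gradient, average, add Gaussian noise.
        A sketch of the standard mechanism, not a particular framework's implementation."""
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # per-example clipping
        mean_grad = np.mean(clipped, axis=0)
        # Gaussian noise scaled to the clipping norm and the batch size
        noise = np.random.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                                 size=mean_grad.shape)
        return weights - lr * (mean_grad + noise)

    # Toy usage: three per-example gradients for a two-parameter model.
    w = np.zeros(2)
    grads = [np.array([3.0, 0.0]), np.array([0.2, -0.1]), np.array([0.0, 4.0])]
    print(dp_sgd_step(w, grads))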