15 research outputs found

    Tell me Why? Tell me More! Explaining Predictions, Iterated Learning Bias, and Counter-Polarization in Big Data Discovery Models

    Outline: What can go wrong in Machine Learning?; Unfair Machine Learning; Iterated Bias & Polarization; Black Box Models; Tell me more: Counter-Polarization; Tell me why: Explanation Generation

    Inverse Classification for Comparison-based Interpretability in Machine Learning

    In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier when no information is available about the classifier itself or about the processed data (neither the training nor the test set). It proposes an instance-based approach whose principle is to determine the minimal changes needed to alter a prediction: given a data point whose classification must be explained, the method identifies a close neighbour that is classified differently, where the notion of closeness integrates a sparsity constraint. This principle is implemented through observation generation in the Growing Spheres algorithm. Experimental results on two datasets illustrate the relevance of the proposed approach, which can be used to gain knowledge about the classifier.
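    The exact Growing Spheres procedure is given in the cited paper; purely as an illustration of the comparison-based principle the abstract describes (function name, defaults, and the final sparsity step are my own assumptions, not the authors' implementation), the sketch below samples candidate observations in spherical shells of growing radius around the instance, keeps the closest differently-classified neighbour, and zeroes out negligible feature changes so the explanation stays sparse.

```python
import numpy as np

def sparse_counterfactual(x, predict, radius_step=0.1, n_samples=500,
                          max_radius=10.0, sparsity_tol=1e-2, rng=None):
    """Illustrative comparison-based explanation in the spirit of Growing Spheres.

    x       : 1-D numpy array, the instance whose prediction is explained.
    predict : callable mapping a 2-D array of points to an array of labels.
    Returns (counterfactual, change_vector), or (None, None) if none is found.
    """
    rng = np.random.default_rng() if rng is None else rng
    original_label = predict(x[None, :])[0]
    low = 0.0
    while low < max_radius:
        high = low + radius_step
        # Sample candidate observations in the spherical shell [low, high) around x.
        directions = rng.normal(size=(n_samples, x.size))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = rng.uniform(low, high, size=(n_samples, 1))
        candidates = x + directions * radii
        labels = np.asarray(predict(candidates))
        enemies = candidates[labels != original_label]
        if len(enemies):
            # Keep the closest differently-classified neighbour found.
            best = enemies[np.argmin(np.linalg.norm(enemies - x, axis=1))]
            move = best - x
            # Crude sparsity step: drop tiny feature changes, but only if the
            # sparsified point is still classified differently from x.
            sparse_move = np.where(np.abs(move) < sparsity_tol, 0.0, move)
            if predict((x + sparse_move)[None, :])[0] != original_label:
                move = sparse_move
            return x + move, move
        low = high
    return None, None
```

    The non-zero entries of the returned change vector are the features a user would have to modify, and by how much, to obtain a different prediction.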

    An Explainable Autoencoder For Collaborative Filtering Recommendation

    Autoencoders are a common building block of deep learning architectures, where they are mainly used for representation learning. They have also been used successfully in Collaborative Filtering (CF) recommender systems to predict missing ratings. Unfortunately, like all black-box machine learning models, they cannot explain their outputs. Hence, while predictions from an Autoencoder-based recommender system might be accurate, it may not be clear to the user why a recommendation was generated. In this work, we design an explainable recommender system using an Autoencoder model whose predictions can be explained using the neighborhood-based explanation style. Our preliminary work can be considered a first step toward an explainable deep learning architecture based on Autoencoders.
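    The abstract does not spell out the architecture or the explanation mechanism; as a rough sketch of what a neighborhood-based explanation on top of an autoencoder recommender could look like (the class, function names, and layer sizes below are illustrative assumptions), one can encode each user's rating vector, treat the reconstruction as rating predictions, and justify a recommendation by the most similar users in the latent space who actually rated the item.

```python
import torch
import torch.nn as nn

class RatingAutoencoder(nn.Module):
    """Toy autoencoder that reconstructs a user's partially observed rating vector."""
    def __init__(self, n_items, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_items, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_items)

    def forward(self, ratings):
        # Reconstructed entries at previously missing positions act as predictions.
        return self.decoder(self.encoder(ratings))

def neighbour_explanation(model, ratings, user_id, item_id, k=5):
    """Neighborhood-style explanation: the k users whose latent profiles are
    closest to the target user's and who have rated the recommended item."""
    with torch.no_grad():
        codes = model.encoder(ratings)                 # latent profile per user
        dist = torch.norm(codes - codes[user_id], dim=1)
    dist[user_id] = float("inf")                       # exclude the user themself
    dist[ratings[:, item_id] == 0] = float("inf")      # keep only users who rated the item
    neighbours = torch.argsort(dist)[:k]
    return neighbours, ratings[neighbours, item_id]
```

    The returned neighbours and their ratings support an explanation of the form "users with tastes similar to yours rated this item highly".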

    Explaining Latent Factor Models for Recommendation with Influence Functions

    Latent factor models (LFMs) such as matrix factorization achieve state-of-the-art performance among Collaborative Filtering (CF) approaches for recommendation. Despite their high recommendation accuracy, a critical issue is their lack of explainability. Extensive efforts have been made in the literature to incorporate explainability into LFMs; however, they either rely on auxiliary information that may not be available in practice, or fail to provide easy-to-understand explanations. In this paper, we propose a fast influence analysis method named FIA, which enforces explicit neighbor-style explanations for LFMs using influence functions, a technique stemming from robust statistics. We first describe how to apply influence functions to LFMs to deliver neighbor-style explanations. We then develop a novel, highly efficient influence computation algorithm for matrix factorization. We further extend it to the more general neural collaborative filtering setting and introduce an approximation algorithm to accelerate influence analysis over neural network models. Experimental results on real datasets demonstrate the correctness, efficiency, and usefulness of the proposed method.
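    FIA's actual derivation and fast algorithms are in the paper; the sketch below only illustrates the generic influence-function idea for matrix factorization, restricted to a single user's latent factors (that restriction, the damping term, and all names are assumptions on my part). Each training rating's influence on a target prediction is approximated as -grad(prediction)^T H^{-1} grad(loss at that rating).

```python
import numpy as np

def rating_influences(p_u, Q_rated, r_rated, q_target, reg=1e-3):
    """Rough influence-function sketch for one user in matrix factorization.

    p_u      : (k,)   latent factors of the user whose prediction we explain
    Q_rated  : (n, k) latent factors of the items this user has rated
    r_rated  : (n,)   the user's observed ratings for those items
    q_target : (k,)   latent factors of the item whose predicted score we explain
    Returns one influence score per observed rating.
    """
    k = p_u.shape[0]
    # Hessian of the user's squared-error loss w.r.t. p_u, with L2 damping.
    H = 2.0 * Q_rated.T @ Q_rated + reg * np.eye(k)
    # Per-rating gradient of the loss w.r.t. p_u.
    residuals = r_rated - Q_rated @ p_u              # (n,)
    grads = -2.0 * residuals[:, None] * Q_rated      # (n, k)
    # Influence on the prediction p_u . q_target:  -grad(pred)^T H^{-1} grad(loss).
    H_inv_qt = np.linalg.solve(H, q_target)          # (k,)
    return -grads @ H_inv_qt                         # (n,)
```

    The training ratings with the largest influence magnitudes then serve as the neighbor-style explanation: the items whose observed ratings most strongly pushed the model toward recommending the target item.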

    Analyzing Deep Learning Algorithms for Recommender Systems

    As the volume of online information increases, recommender systems have become an effective strategy for overcoming information overload by giving selective recommendations based on criteria such as user ratings and user interactions. Recommender systems are used in a variety of fields, common examples being music recommendation and product recommendation on e-commerce websites. These systems are usually built using collaborative filtering, content-based filtering, or both. The most traditional way of developing a collaborative filtering recommender system is matrix factorization, which works by decomposing a user-item interaction matrix into the product of two lower-dimensional rectangular matrices. However, as new techniques appear, matrix factorization is often replaced by other algorithms that can perform better in a recommender system. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing. These successes are made possible by deep learning algorithms' outstanding ability to learn feature representations non-linearly. The influence of deep learning is also prevalent in recommender systems, as demonstrated by its effectiveness when applied to information retrieval and recommender research. This research project analyzes and implements variants of two deep learning algorithms, autoencoders and restricted Boltzmann machines, and examines how they perform in recommender systems compared to matrix factorization.
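    For reference, here is a minimal sketch of the matrix factorization baseline the abstract mentions (hyperparameters and the toy matrix are arbitrary choices of mine, not taken from the project): stochastic gradient descent fits two low-rank matrices whose product approximates the observed entries of the user-item matrix, and the same product fills in the missing entries.

```python
import numpy as np

def factorize(R, k=2, steps=500, lr=0.01, reg=0.02, seed=0):
    """Minimal SGD matrix factorization: approximate R (users x items) by P @ Q.T,
    trained only on the observed (non-zero) entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    users, items = np.nonzero(R)                     # indices of observed ratings
    for _ in range(steps):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]              # prediction error on one rating
            p_u = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * p_u - reg * Q[i])
    return P, Q

# Toy usage: zeros mark unobserved ratings; the factorization fills them in.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)
P, Q = factorize(R)
print(np.round(P @ Q.T, 1))   # dense matrix of predicted ratings
```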