
    Visually-Aware Personalized Recommendation using Interpretable Image Representations

    Visually-aware recommender systems use visual signals present in the underlying data to model the visual characteristics of items and users' preferences towards them. In the domain of clothing recommendation, incorporating items' visual information (e.g., product images) is particularly important, since clothing item appearance is often a critical factor influencing the user's purchasing decisions. Current state-of-the-art visually-aware recommender systems utilize image features extracted from pre-trained deep convolutional neural networks; however, these extremely high-dimensional representations are difficult to interpret, especially in relation to the relatively small number of visual properties that may guide users' decisions. In this paper we propose a novel approach to personalized clothing recommendation that models the dynamics of individual users' visual preferences. By using interpretable image representations generated with a unique feature learning process, our model learns to explain users' prior feedback in terms of their affinity towards specific visual attributes and styles. Our approach achieves state-of-the-art performance on personalized ranking tasks, and the incorporation of interpretable visual features allows for powerful model introspection, which we demonstrate by using an interactive recommendation algorithm and visualizing the rise and fall of fashion trends over time.
    Comment: AI for Fashion workshop, held in conjunction with KDD 2018, London. 4 pages
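    A rough illustration of the general idea rather than the authors' exact model: the Python sketch below ranks items with a BPR-style pairwise objective in which each item carries an interpretable visual-attribute vector and each user learns an affinity vector over those same attributes. All sizes, the feature matrix, and the update rule are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, n_attrs = 100, 500, 20      # toy sizes (assumed)
    item_attrs = rng.random((n_items, n_attrs))   # stand-in for learned interpretable features
    user_affinity = np.zeros((n_users, n_attrs))  # per-user affinity towards each visual attribute

    def score(u, i):
        return user_affinity[u] @ item_attrs[i]

    def bpr_step(u, i_pos, i_neg, lr=0.05, reg=0.01):
        """One BPR update: push an item the user liked above a sampled negative."""
        x = score(u, i_pos) - score(u, i_neg)
        g = 1.0 / (1.0 + np.exp(x))               # g = 1 - sigmoid(x): how badly the pair is ranked
        user_affinity[u] += lr * (g * (item_attrs[i_pos] - item_attrs[i_neg])
                                  - reg * user_affinity[u])

    # After training, user_affinity[u] can be read off directly as that user's
    # preference for each visual attribute, which is the kind of model
    # introspection the abstract describes.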

    Maintaining The Humanity of Our Models

    Artificial intelligence and machine learning have been major research interests in computer science for the better part of the last few decades. However, all too recently, both AI and ML have rapidly grown to be media frenzies, pressuring companies and researchers to claim they use these technologies. As ML continues to percolate into daily life, we, as computer scientists and machine learning researchers, are responsible for ensuring we clearly convey the extent of our work and the humanity of our models. Regularizing ML for mass adoption requires a rigorous standard for model interpretability, a deep consideration for human bias in data, and a transparent understanding of a model's societal effects.
    Comment: Accepted into the 2018 AAAI Spring Symposium: AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents

    Open the Black Box: Data-Driven Explanation of Black Box Decision Systems

    Black box systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We introduce the local-to-global framework for black box explanation, a novel approach with promising early results, which paves the way for a wide spectrum of future developments along three dimensions: (i) the language for expressing explanations in terms of highly expressive logic-based rules, with a statistical and causal interpretation; (ii) the inference of local explanations aimed at revealing the logic of the decision adopted for a specific instance, by querying and auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of the many local explanations into simple global ones, with algorithms that optimize the quality and comprehensibility of explanations.
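    A minimal sketch of the local step only, not the authors' implementation: query a stand-in black box in a neighbourhood of the target instance and fit a shallow decision tree whose paths act as readable local rules. The black box, the synthetic data, and the Gaussian perturbation scheme are all assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier   # stands in for the black box
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    target = X[0]
    # synthetic neighbourhood around the target instance (assumed perturbation scheme)
    neighbours = target + 0.3 * np.random.default_rng(0).normal(size=(500, X.shape[1]))
    labels = black_box.predict(neighbours)                 # audit the black box locally

    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(neighbours, labels)
    print(export_text(surrogate))                          # human-readable local decision rules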

    Try This Instead: Personalized and Interpretable Substitute Recommendation

    As a fundamental yet significant process in personalized recommendation, candidate generation and suggestion effectively help users spot the most suitable items for them. Consequently, identifying substitutable items that are interchangeable opens up new opportunities to refine the quality of generated candidates. When a user is browsing a specific type of product (e.g., a laptop) to buy, the accurate recommendation of substitutes (e.g., better-equipped laptops) can offer the user more suitable options to choose from, thus substantially increasing the chance of a successful purchase. However, existing methods merely treat this problem as mining pairwise item relationships without considering users' personal preferences. Moreover, the substitutable relationships are implicitly identified through the learned latent representations of items, leading to uninterpretable recommendation results. In this paper, we propose attribute-aware collaborative filtering (A2CF) to perform substitute recommendation by addressing issues from both the personalization and interpretability perspectives. Instead of directly modelling user-item interactions, we extract explicit and polarized item attributes from user reviews with sentiment analysis, whereafter the representations of attributes, users, and items are simultaneously learned. Then, by treating attributes as the bridge between users and items, we can thoroughly model the user-item preferences (i.e., personalization) and item-item relationships (i.e., substitution) for recommendation. In addition, A2CF is capable of generating intuitive interpretations by analyzing which attributes a user currently cares about the most and comparing the recommended substitutes with her/his currently browsed items at the attribute level. The recommendation effectiveness and interpretation quality of A2CF are demonstrated via extensive experiments on three real datasets.
    Comment: To appear in SIGIR'20
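    A toy sketch of the scoring intuition rather than the A2CF model itself: given attribute vectors already distilled from review sentiment, rank candidate substitutes by how much they improve on the browsed item, weighted by the attributes the user cares about. The attribute names and all numbers below are invented for illustration.

    import numpy as np

    attrs = ["battery", "screen", "keyboard", "price"]   # hypothetical attributes
    user_care = np.array([0.6, 0.3, 0.05, 0.05])         # how much this user cares about each attribute
    browsed = np.array([0.4, 0.7, 0.6, 0.8])             # attribute quality of the currently browsed laptop
    candidates = {
        "laptop_A": np.array([0.9, 0.7, 0.5, 0.6]),
        "laptop_B": np.array([0.5, 0.9, 0.7, 0.7]),
    }

    def substitute_score(vec):
        # reward attribute-level improvement over the browsed item,
        # weighted by what this user currently cares about most
        return float(user_care @ np.maximum(vec - browsed, 0.0))

    ranked = sorted(candidates.items(), key=lambda kv: substitute_score(kv[1]), reverse=True)
    for name, vec in ranked:
        gain = np.maximum(vec - browsed, 0.0)
        reason = attrs[int(np.argmax(user_care * gain))]
        print(f"{name}: score={substitute_score(vec):.3f} (mainly better {reason})")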

    Techniques for Interpretable Machine Learning

    Interpretable machine learning tackles the important problem that humans cannot understand the behaviors of complex machine learning models and how these models arrive at a particular decision. Although many approaches have been proposed, a comprehensive understanding of the achievements and challenges is still lacking. We provide a survey covering existing techniques to increase the interpretability of machine learning models. We also discuss crucial issues that the community should consider in future work, such as designing user-friendly explanations and developing comprehensive evaluation metrics to further push forward the area of interpretable machine learning.
    Comment: Accepted by Communications of the ACM (CACM), Review Article
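    As a concrete example of one post-hoc technique from the family such surveys cover (the example is ours, not drawn from the article): permutation feature importance measures how much a trained model's held-out accuracy drops when a single feature is shuffled. Dataset and model choices here are arbitrary.

    from sklearn.datasets import load_wine
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_wine(return_X_y=True, as_frame=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    # shuffle each feature in turn and record the drop in held-out accuracy
    result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]:<30} {result.importances_mean[idx]:.3f}")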

    Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

    Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice, and other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
    Comment: Author's pre-publication version of a 2019 Nature Machine Intelligence article. A shorter version was published in the NIPS 2018 Workshop on Critiquing and Correcting Trends in Machine Learning. Also expands on the NSF Statistics at a Crossroads Webinar.
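    A toy illustration of the article's thesis rather than anything taken from it: an inherently interpretable model, here a sparse (L1-regularized) logistic regression whose entire decision logic is a handful of readable coefficients, so no post-hoc explanation of a black box is needed. The dataset and hyperparameters are arbitrary.

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    ).fit(X, y)

    # the fitted model *is* these weights; sorting by magnitude shows which
    # standardized features drive the prediction and in which direction
    coefs = model.named_steps["logisticregression"].coef_[0]
    for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
        print(f"{name:<25} {w:+.2f}")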

    Explainability in Human-Agent Systems

    This paper presents a taxonomy of explainability in Human-Agent Systems. We consider fundamental questions about the Why, Who, What, When and How of explainability. First, we define explainability and its relationship to the related terms of interpretability, transparency, explicitness, and faithfulness. These definitions allow us to answer why explainability is needed in the system, whom it is geared to, and what explanations can be generated to meet this need. We then consider when the user should be presented with this information. Last, we consider how objective and subjective measures can be used to evaluate the entire system. This last question is the most encompassing, as it will need to evaluate all other issues regarding explainability.

    What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use

    Translating machine learning (ML) models effectively to clinical practice requires establishing clinicians' trust. Explainability, or the ability of an ML model to justify its outcomes and assist clinicians in rationalizing the model prediction, has been generally understood to be critical to establishing trust. However, the field suffers from a lack of concrete definitions for usable explanations in different settings. To identify specific aspects of explainability that may catalyze building trust in ML models, we surveyed clinicians from two distinct acute care specialties (Intensive Care Unit and Emergency Department). We use their feedback to characterize when explainability helps to improve clinicians' trust in ML models. We further identify the classes of explanations that clinicians identified as most relevant and crucial for effective translation to clinical practice. Finally, we discern concrete metrics for rigorous evaluation of clinical explainability methods. By integrating clinicians' and ML researchers' perceptions of explainability, we hope to facilitate the endorsement, broader adoption, and sustained use of ML systems in healthcare.

    Explainable Machine Learning for Scientific Insights and Discoveries

    Machine learning methods have been remarkably successful at extracting essential information from data across a wide range of application areas. An exciting and relatively recent development is the uptake of machine learning in the natural sciences, where the major goal is to obtain novel scientific insights and discoveries from observational or simulated data. A prerequisite for obtaining a scientific outcome is domain knowledge, which is needed to gain explainability but also to enhance scientific consistency. In this article we review explainable machine learning in view of applications in the natural sciences and discuss three core elements that we identified as relevant in this context: transparency, interpretability, and explainability. With respect to these core elements, we provide a survey of recent scientific works that incorporate machine learning and of the way explainable machine learning is used in combination with domain knowledge from the application areas.

    From Physics-Based Models to Predictive Digital Twins via Interpretable Machine Learning

    This work develops a methodology for creating a data-driven digital twin from a library of physics-based models representing various asset states. The digital twin is updated using interpretable machine learning. Specifically, we use optimal trees---a recently developed scalable machine learning method---to train an interpretable data-driven classifier. Training data for the classifier are generated offline using simulated scenarios solved by the library of physics-based models. These data can be further augmented using experimental or other historical data. In operation, the classifier uses observational data from the asset to infer which physics-based models in the model library are the best candidates for the updated digital twin. The approach is demonstrated through the development of a structural digital twin for a 12-ft-wingspan unmanned aerial vehicle. This digital twin is built from a library of reduced-order models of the vehicle in a range of structural states. The data-driven digital twin dynamically updates in response to structural damage or degradation and enables the aircraft to replan a safe mission accordingly. Within this context, we study the performance of the optimal tree classifiers and demonstrate how their interpretability enables explainable structural assessments from sparse sensor measurements and also informs optimal sensor placement.
    Comment: 20 pages, 13 figures, submitted to AIAA Journal
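    A minimal sketch of the classification step, with a plain CART decision tree standing in for the optimal-trees method and two invented strain sensors standing in for the physics-based model library. Everything below is an assumption made to illustrate how an interpretable classifier can map sparse sensor readings back to a structural state.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    states = {0: "healthy", 1: "20% stiffness loss", 2: "40% stiffness loss"}  # assumed model library

    def simulate_sensors(state, n):
        """Stand-in for the physics-based models: strain at two sensors plus noise."""
        base = np.array([1.0, 0.8]) * (1.0 + 0.25 * state)   # damage raises strain
        return base + 0.05 * rng.normal(size=(n, 2))

    X = np.vstack([simulate_sensors(s, 300) for s in states])
    y = np.repeat(list(states), 300)

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(clf, feature_names=["strain_1", "strain_2"]))  # readable decision thresholds

    # in operation, the digital twin would classify live sensor data to pick
    # the best-matching physics-based model from the library
    print(states[int(clf.predict(np.array([[1.3, 1.05]]))[0])])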