
    Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions

    We consider the paradigm of a black box AI system that makes life-critical decisions. We propose an "arguing machines" framework that pairs the primary AI system with a secondary one that is independently trained to perform the same task. We show that disagreement between the two systems, without any knowledge of underlying system design or operation, is sufficient to arbitrarily improve the accuracy of the overall decision pipeline given human supervision over disagreements. We demonstrate this system in two applications: (1) an illustrative example of image classification and (2) large-scale real-world semi-autonomous driving data. For the first application, we apply this framework to image classification, achieving a reduction from 8.0% to 2.8% top-5 error on ImageNet. For the second application, we apply this framework to Tesla Autopilot and demonstrate the ability to predict 90.4% of system disengagements that were labeled by human annotators as challenging and needing human supervision.
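
    The decision rule of this framework can be illustrated with a short sketch: two independently trained models score the same input, and any case where their top predictions differ is escalated to a human. The function below is a minimal illustration under that assumption, not the authors' implementation; the models and the human-escalation callback are placeholders.

```python
# Minimal sketch of the "arguing machines" disagreement check (an assumption,
# not the authors' code): escalate to a human whenever the two models disagree.
from typing import Callable, Sequence

def arbitrate(primary_probs: Sequence[float],
              secondary_probs: Sequence[float],
              ask_human: Callable[[], int]) -> int:
    """Accept the primary model's decision unless the two models disagree,
    in which case the decision is escalated to a human supervisor."""
    primary_top = max(range(len(primary_probs)), key=primary_probs.__getitem__)
    secondary_top = max(range(len(secondary_probs)), key=secondary_probs.__getitem__)
    if primary_top != secondary_top:      # the two systems "argue"
        return ask_human()                # human supervision over the disagreement
    return primary_top                    # agreement: keep the primary decision

# Example: the top predictions differ, so the case is escalated (here, class 1).
print(arbitrate([0.7, 0.3], [0.2, 0.8], ask_human=lambda: 1))
```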

    Article Search Tool and Topic Classifier

    This thesis focuses on three main tasks related to document recommendation. The first approach applies existing techniques to document recommendation using Doc2Vec. A robust representation is presented to understand how noise induced in the embedding space affects the predicted recommendations. The next phase focuses on improving these recommendations using a topic classifier, for which a Hierarchical Attention Network is employed. To increase prediction accuracy, this work relates accuracy to the embedding size of the words in the article. In the last phase, model-agnostic Explainable AI (XAI) techniques are implemented to support the findings of this thesis; XAI techniques are also employed to show how hyper-parameters of a black-box model can be fine-tuned.
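
    As a rough illustration of the first phase, the sketch below builds Doc2Vec embeddings with the gensim library (4.x API assumed) and retrieves the most similar articles for a query; the toy corpus, tokenisation, and identifiers are assumptions, not the thesis' data or code.

```python
# A minimal sketch of Doc2Vec-based article recommendation using gensim 4.x.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical toy corpus: article id -> text.
corpus = {
    "a1": "explainable ai methods for black box models",
    "a2": "hierarchical attention networks for topic classification",
    "a3": "document embeddings with doc2vec for recommendation",
}

# Tag each article with its identifier so similar documents can be looked up later.
training_docs = [TaggedDocument(words=text.split(), tags=[doc_id])
                 for doc_id, text in corpus.items()]

model = Doc2Vec(training_docs, vector_size=50, min_count=1, epochs=40)

# Recommend articles similar to a new query document.
query_vec = model.infer_vector("attention based topic models".split())
print(model.dv.most_similar([query_vec], topn=2))
```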

    Knowing your FATE: Friendship, Action and Temporal Explanations for User Engagement Prediction on Social Apps

    With the rapid growth and prevalence of social network applications (Apps) in recent years, understanding user engagement has become increasingly important in order to provide useful insights for future App design and development. While several promising neural modeling approaches were recently pioneered for accurate user engagement prediction, their black-box designs unfortunately limit model explainability. In this paper, we study a novel problem of explainable user engagement prediction for social network Apps. First, we propose a flexible definition of user engagement for various business scenarios, based on future metric expectations. Next, we design an end-to-end neural framework, FATE, which incorporates three key factors that we identify to influence user engagement, namely friendships, user actions, and temporal dynamics, to achieve explainable engagement predictions. FATE is based on a tensor-based graph neural network (GNN), an LSTM, and a mixture attention mechanism, which allows for (a) predictive explanations based on learned weights across different feature categories, (b) reduced network complexity, and (c) improved performance in both prediction accuracy and training/inference time. We conduct extensive experiments on two large-scale datasets from Snapchat, where FATE outperforms state-of-the-art approaches with ≈10% error reduction and ≈20% runtime reduction. We also evaluate explanations from FATE, showing strong quantitative and qualitative performance. Comment: Accepted to KDD 2020 Applied Data Science Track.
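
    To make the mixture-attention idea concrete, the sketch below (a simplified stand-in, not the FATE implementation) weights three category embeddings, for friendship, action, and temporal features, and returns the attention weights alongside the prediction so they can be read as per-category explanations. The module shape and dimensions are assumptions.

```python
# A minimal PyTorch sketch of a mixture-attention head over three feature
# categories; a stand-in for illustration, not the authors' FATE code.
import torch
import torch.nn as nn

class MixtureAttentionHead(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)      # one scalar score per category embedding
        self.predict = nn.Linear(dim, 1)    # engagement prediction from the mixture

    def forward(self, friendship, action, temporal):
        cats = torch.stack([friendship, action, temporal], dim=1)       # (batch, 3, dim)
        weights = torch.softmax(self.score(cats).squeeze(-1), dim=1)    # (batch, 3)
        mixed = (weights.unsqueeze(-1) * cats).sum(dim=1)               # weighted mixture
        return torch.sigmoid(self.predict(mixed)), weights              # prediction + explanation

head = MixtureAttentionHead(dim=16)
pred, expl = head(torch.randn(4, 16), torch.randn(4, 16), torch.randn(4, 16))
print(expl)  # attention weights act as per-category explanations
```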

    Opening the Black-Box of AI: Challenging Pattern Robustness and Improving Theorizing through Explainable AI Methods

    Machine Learning (ML) algorithms, as an approach to Artificial Intelligence (AI), show unprecedented analytical capabilities and tremendous potential for pattern detection in large data sets. Despite researchers showing great interest in these methodologies, ML remains largely underutilized because the algorithms are a black box, preventing the interpretation of learned models. Recent research on explainable artificial intelligence (XAI) sheds light on these models by allowing researchers to identify the main determinants of a prediction through post-hoc analyses. XAI thereby affords the opportunity to critically reflect on identified patterns, enhancing decision making and theorizing based on these patterns. Based on two large and publicly available data sets, we show that different variables within the same data set can generate models with similar predictive accuracy. In exploring this issue, we develop guidelines and recommendations for the effective use of XAI in research, and particularly for theorizing from identified patterns.
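
    The kind of post-hoc analysis described here can be sketched with permutation importance from scikit-learn: two models trained on different variable subsets of the same data are compared on accuracy and on the variables they rely on. The data set, model, and subsets below are illustrative assumptions, not those used in the paper.

```python
# A minimal sketch of post-hoc feature-importance analysis with scikit-learn,
# using a synthetic data set rather than the data sets from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for cols in ([0, 1, 2, 3, 4], [5, 6, 7, 8, 9]):              # two disjoint variable subsets
    model = RandomForestClassifier(random_state=0).fit(X_train[:, cols], y_train)
    acc = model.score(X_test[:, cols], y_test)
    imp = permutation_importance(model, X_test[:, cols], y_test, n_repeats=10, random_state=0)
    # Compare accuracy and the determinants each model relies on.
    print(cols, round(acc, 3), imp.importances_mean.round(3))
```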

    Classification of Explainable Artificial Intelligence Methods through Their Output Formats

    Machine and deep learning have proven their utility for generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”, “explainable machine learning”, and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliographies of these articles. The addition of the explanation-format dimension makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, existing XAI methods provide several solutions to meet requirements that differ considerably between users, problems, and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the explanation formats and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields and by new regulation.
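
    To illustrate how an output-format dimension could support method selection, the sketch below maps desired explanation formats to candidate methods; the mapping is a hypothetical example for illustration, not the paper's classification system.

```python
# Hypothetical illustration of output-format-driven method selection; the
# mapping below is an assumption, not the taxonomy proposed in the paper.
FORMAT_TO_METHODS = {
    "numeric attribution": ["SHAP", "LIME"],
    "visual saliency": ["Grad-CAM"],
    "rule-based": ["Anchors"],
}

def suggest_methods(desired_format):
    """Return candidate XAI methods whose explanation output matches the format."""
    return FORMAT_TO_METHODS.get(desired_format, [])

print(suggest_methods("visual saliency"))  # ['Grad-CAM']
```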

    Deep Quantum Graph Dreaming: Deciphering Neural Network Insights into Quantum Experiments

    Despite their promise to facilitate new scientific discoveries, the opaqueness of neural networks presents a challenge in interpreting the logic behind their findings. Here, we use an explainable-AI (XAI) technique called inception or deep dreaming, which was invented in machine learning for computer vision. We use this technique to explore what neural networks learn about quantum optics experiments. Our story begins by training deep neural networks on the properties of quantum systems. Once trained, we "invert" the neural network, effectively asking how it imagines a quantum system with a specific property, and how it would continuously modify the quantum system to change that property. We find that the network can shift the initial distribution of properties of the quantum system, and that we can conceptualize the learned strategies of the neural network. Interestingly, we find that in the first layers the neural network identifies simple properties, while in the deeper ones it can identify complex quantum structures and even quantum entanglement. This is reminiscent of long-understood properties in computer vision, which we now identify in a complex natural-science task. Our approach could be useful for developing new, more interpretable, advanced AI-based scientific discovery techniques in quantum physics. Comment: Modified Figure 2. Fixed minor typo.
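
    The inversion step can be sketched as gradient ascent on the input of a frozen network so that a predicted property increases; the toy network and input encoding below are placeholders, not the paper's model of quantum optics experiments.

```python
# A minimal PyTorch sketch of the "dreaming"/inversion idea: freeze a trained
# property-prediction network and optimise its input by gradient ascent.
# The network and the input encoding are placeholder assumptions.
import torch
import torch.nn as nn

property_net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
property_net.requires_grad_(False)            # freeze the trained model

x = torch.randn(1, 8, requires_grad=True)     # encoded quantum experiment (placeholder)
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    loss = -property_net(x).mean()            # ascend the predicted property
    loss.backward()
    optimizer.step()

print(property_net(x).item())                 # property value after "dreaming"
```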