    Information Bottleneck

    The celebrated information bottleneck (IB) principle of Tishby et al. has recently enjoyed renewed attention due to its application in the area of deep learning. This collection investigates the IB principle in this new context. The individual chapters in this collection: • provide novel insights into the functional properties of the IB; • discuss the IB principle (and its derivatives) as an objective for training multi-layer machine learning structures such as neural networks and decision trees; and • offer a new perspective on neural network learning via the lens of the IB framework. Our collection thus contributes to a better understanding of the IB principle specifically for deep learning and, more generally, of information-theoretic cost functions in machine learning. This paves the way toward explainable artificial intelligence.
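    For context, the IB principle referenced above is usually stated as a variational trade-off between compression and prediction; a standard textbook formulation (not reproduced from this collection, with β as the trade-off parameter) is:

```latex
% Information bottleneck Lagrangian: learn a stochastic encoding p(t|x) of X
% into a representation T that is maximally compressed (small I(X;T)) while
% staying predictive of Y (large I(T;Y)); beta sets the trade-off.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

    In the deep-learning reading discussed in the collection, T is typically identified with the activations of a hidden layer of the network.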

    Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning

    Current advances in Artificial Intelligence and machine learning in general, and deep learning in particular, have reached unprecedented impact not only across research communities but also over popular media channels. However, concerns about the interpretability and accountability of AI have been raised by influential thinkers. Despite the recent impact of AI, several works have identified the need for principled knowledge representation and reasoning mechanisms, integrated with deep learning-based systems, to provide sound and explainable models for such systems. Neural-symbolic computing aims at integrating, as foreseen by Valiant, two of the most fundamental cognitive abilities: the ability to learn from the environment and the ability to reason from what has been learned. Neural-symbolic computing has been an active topic of research for many years, reconciling the advantages of robust learning in neural networks with the reasoning capabilities and interpretability of symbolic representations. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining the main characteristics of the methodology: the principled integration of neural learning with symbolic knowledge representation and reasoning, allowing for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.
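    As a purely illustrative sketch of the integration pattern described above (not code from the paper, and with all names hypothetical), a neural model can propose weighted facts that a small symbolic rule base then completes by forward chaining:

```python
# Hypothetical toy example of neural-symbolic integration: a (stand-in) neural
# classifier proposes facts with confidences, and a symbolic rule base derives
# further facts from them by forward chaining to a fixed point.

def neural_perception(signal):
    # Placeholder for a trained network; returns facts with confidence scores.
    return {("is_bird", "x1"): 0.92, ("is_penguin", "x1"): 0.88}

RULES = [
    # (premises, conclusion): if every premise is derived, add the conclusion.
    ([("is_bird", "x1")], ("has_wings", "x1")),
    ([("is_penguin", "x1")], ("cannot_fly", "x1")),
]

def symbolic_reasoning(weighted_facts, threshold=0.5):
    derived = {fact for fact, conf in weighted_facts.items() if conf >= threshold}
    changed = True
    while changed:                      # forward chaining until no rule fires
        changed = False
        for premises, conclusion in RULES:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(symbolic_reasoning(neural_perception(signal=None)))
```

    The learned component supplies robustness to noisy inputs, while the rule base keeps the derived conclusions inspectable, which is the interpretability benefit the survey emphasizes.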

    Usefulness of Heat Map Explanations for Deep-Learning-Based Electrocardiogram Analysis

    Deep neural networks are complex machine learning models that have shown promising results in analyzing high-dimensional data such as those collected from medical examinations. Such models have the potential to provide fast and accurate medical diagnoses. However, their high complexity makes deep neural networks and their predictions difficult to understand. Providing model explanations can be a way of increasing the understanding of “black box” models and building trust. In this work, we applied transfer learning to develop a deep neural network to predict sex from electrocardiograms. Using the visual explanation method Grad-CAM, heat maps were generated from the model in order to understand how it makes predictions. To evaluate the usefulness of the heat maps and determine whether they identified electrocardiogram features that could be recognized as discriminating sex, medical doctors provided feedback. Based on the feedback, we concluded that, in our setting, this mode of explainable artificial intelligence does not provide meaningful information to medical doctors and is not useful in the clinic. Our results indicate that improved explanation techniques, tailored to medical data, should be developed before deep neural networks can be applied in the clinic for diagnostic purposes.
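    The study's exact model and data pipeline are not reproduced here; as a rough sketch of how Grad-CAM heat maps of the kind described above are typically computed for a 1D convolutional ECG classifier (the architecture, lead count, and sampling rate below are assumptions), using PyTorch:

```python
import torch
import torch.nn.functional as F

# Minimal Grad-CAM sketch for a 1D CNN over ECG signals. The architecture is a
# stand-in for the paper's transfer-learned model, not a reproduction of it.
model = torch.nn.Sequential(
    torch.nn.Conv1d(12, 16, kernel_size=7, padding=3),  # 12-lead input (assumed)
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 2),                              # two classes: sex
)
last_conv = model[0]

activations, gradients = {}, {}
last_conv.register_forward_hook(lambda m, inp, out: activations.update(a=out))
last_conv.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

ecg = torch.randn(1, 12, 5000)            # one 10 s recording at 500 Hz (assumed)
logits = model(ecg)
logits[0, logits[0].argmax()].backward()  # gradient of the predicted class score

weights = gradients["g"].mean(dim=2, keepdim=True)     # average gradient per channel
cam = F.relu((weights * activations["a"]).sum(dim=1))  # weighted feature maps, ReLU
cam = F.interpolate(cam.unsqueeze(1), size=ecg.shape[-1],
                    mode="linear", align_corners=False).squeeze()
heat_map = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
```

    The resulting heat_map assigns a relevance score to each sample of the input signal, which can then be overlaid on the ECG trace for clinicians to review.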

    Temporal Causal Inference in Wind Turbine SCADA Data Using Deep Learning for Explainable AI

    Machine learning techniques have been widely used for condition-based monitoring of wind turbines using Supervisory Control and Data Acquisition (SCADA) data. However, many machine learning models, including neural networks, operate as black boxes: despite performing suitably well as predictive models, they are not able to identify causal associations within the data. For data-driven systems to approach human-level intelligence in generating effective maintenance strategies, it is essential to discover hidden knowledge in the operational data. In this paper, we apply deep learning to discover causal relationships between multiple features (confounders) in SCADA data for faults in various sub-components of an operational turbine, using convolutional neural networks (CNNs) with attention. Our technique overcomes the black-box nature of conventional deep learners and identifies hidden confounders in the data through the use of temporal causal graphs. We demonstrate the effects of SCADA features on a wind turbine's operational status, and show that our technique contributes to explainable AI for wind energy applications by providing transparent and interpretable decision support.
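    The paper's exact architecture is not reproduced here; the sketch below only illustrates the general ingredients of attention-based temporal causal discovery (a causal dilated 1D convolution per target channel plus a learned attention vector over input channels whose trained weights flag candidate causes). All names, sizes, and hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCausalCNN(nn.Module):
    """Predicts one target SCADA channel from the past of all channels.

    A softmax attention vector gates the input channels; after training, large
    attention weights mark channels that behave as potential causes of the
    target. Illustrative sketch only, not the paper's model.
    """

    def __init__(self, n_channels, hidden=32, kernel_size=3, dilation=2):
        super().__init__()
        self.attention = nn.Parameter(torch.ones(n_channels))
        # Left-padding keeps the convolution causal: the output at time t
        # depends only on inputs at times <= t.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(n_channels, hidden, kernel_size, dilation=dilation)
        self.head = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, x):                           # x: (batch, channels, time)
        gate = torch.softmax(self.attention, dim=0).view(1, -1, 1)
        x = F.pad(x * gate, (self.pad, 0))          # pad on the left only
        return self.head(torch.relu(self.conv(x)))  # (batch, 1, time)

# Usage sketch: fit on SCADA windows to predict one target channel, then read
# the attention weights off as candidate causal strengths for that target.
model = AttentionCausalCNN(n_channels=8)
causal_scores = torch.softmax(model.attention, dim=0)
```

    Fitting one such model per target channel and thresholding the attention weights is one way to assemble a temporal causal graph of the kind described above.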