18 research outputs found

    Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks

    Get PDF
    Explainability in Artificial Intelligence has been revived as a topic of active research by the need to convey safety and trust to users regarding the 'how' and 'why' of automated decision-making. Whilst a plethora of approaches have been developed for post-hoc explainability, only a few focus on how to use domain knowledge and how this influences the understandability of global explanations from the users' perspective. In this paper, we show how ontologies help the understandability of global post-hoc explanations, presented in the form of symbolic models. In particular, we build on Trepan, an algorithm that explains artificial neural networks by means of decision trees, and we extend it to include ontologies modeling domain knowledge in the process of generating explanations. We present the results of a user study that measures the understandability of decision trees through a syntactic complexity measure, the time and accuracy of responses, and reported user confidence and understandability. The user study considers domains where explanations are critical, namely finance and medicine. The results show that decision trees generated with our algorithm, which takes domain knowledge into account, are more understandable than those generated by standard Trepan without the use of ontologies.
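    As a rough illustration of the idea behind Trepan-style distillation with domain knowledge, the following sketch fits a shallow surrogate decision tree to a neural network's own predictions, with splits expressed over higher-level concepts (groups of raw features) rather than raw inputs. This is a hypothetical Python/scikit-learn toy, not the authors' Trepan Reloaded implementation; the feature grouping standing in for an ontology, and all names in it, are invented for illustration.

        # Hypothetical sketch (not the authors' code): distil a neural network
        # into a shallow decision tree whose splits use ontology-level concepts
        # (groups of raw features) instead of the raw inputs themselves.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
        black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                  random_state=0).fit(X, y)

        # Assumed "ontology": raw features grouped under two named concepts.
        ontology = {"Concept_A": [0, 1, 2, 3], "Concept_B": [4, 5, 6, 7]}

        # Aggregate raw features into concept-level scores (here, a plain mean)
        # so that the surrogate tree splits on terms a user can recognise.
        X_concepts = np.column_stack([X[:, idx].mean(axis=1)
                                      for idx in ontology.values()])

        # Label the concept-level data with the black box's own predictions
        # and fit a shallow, readable surrogate tree on them.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X_concepts, black_box.predict(X))

        print(export_text(surrogate, feature_names=list(ontology.keys())))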

    Towards Knowledge-driven Distillation and Explanation of Black-box Models.

    Get PDF
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target a model-agnostic distillation approach exemplified with these two frameworks; secondly, to study how these two frameworks interact on a theoretical level; and, thirdly, to investigate use cases in ML and AI in a comparative manner. Specifically, we envision that user studies will help determine the human understandability of explanations generated using these two frameworks.
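    As a toy illustration of the first ingredient, a perceptron (threshold) connective declares that an individual belongs to a composite concept when a weighted sum over the simpler concepts it satisfies reaches a threshold. The sketch below is a hypothetical Python rendering of that idea, not the paper's Description Logic formalisation; the concept names, weights, and threshold are invented.

        # Hypothetical toy: a perceptron (threshold) connective combines concept
        # memberships linearly and compares the result to a threshold.
        def threshold_concept(memberships: dict[str, bool],
                              weights: dict[str, float],
                              threshold: float) -> bool:
            score = sum(w for name, w in weights.items()
                        if memberships.get(name, False))
            return score >= threshold

        # "HighRiskPatient" as a weighted combination of simpler concepts
        # (all names and numbers invented for illustration).
        weights = {"Smoker": 1.0, "Hypertensive": 1.5, "Over65": 0.5}
        patient = {"Smoker": True, "Hypertensive": False, "Over65": True}
        print(threshold_concept(patient, weights, threshold=1.5))  # True (1.0 + 0.5)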

    Fast Resistive Bolometry

    Full text link

    Gutes Klima für die Zukunft. Dekarbonisierung als wichtiger Schlüssel zum nachhaltigen Bauen mit Beton

    Get PDF
    How can we meet the growing demand for housing while also satisfying the increasing desire for sustainability and climate protection? And how can the various stakeholders in the concrete construction industry contribute? These conference proceedings provide an overview of the opportunities and challenges of sustainable concrete construction and show, with compelling examples, which paths industry, regulators, and the public sector are taking to achieve climate-neutral construction.

    Lovington Leader, 12-03-1915

    Get PDF

    Machine Learning for Actionable Warning Identification: A Comprehensive Survey

    Full text link
    Actionable Warning Identification (AWI) plays a crucial role in improving the usability of static code analyzers. With recent advances in Machine Learning (ML), various approaches have been proposed to incorporate ML techniques into AWI. These ML-based AWI approaches, benefiting from ML's strong ability to learn subtle and previously unseen patterns from historical data, have demonstrated superior performance. However, a comprehensive overview of these approaches is missing, which could hinder researchers and practitioners from understanding the current landscape and discovering potential for future improvement in the ML-based AWI community. In this paper, we systematically review the state-of-the-art ML-based AWI approaches. First, we employ a meticulous survey methodology and gather 50 primary studies from 2000/01/01 to 2023/09/01. Then, we outline the typical ML-based AWI workflow, including the warning dataset preparation, preprocessing, AWI model construction, and evaluation stages. Within this workflow, we categorize ML-based AWI approaches based on the warning output format. In addition, we analyze the techniques used in each stage, along with their strengths, weaknesses, and distribution. Finally, we provide practical research directions for future ML-based AWI approaches, focusing on aspects such as data improvement (e.g., enhancing the warning labeling strategy) and model exploration (e.g., exploring large language models for AWI).
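    The workflow summarised above (prepare a labelled warning dataset, featurise it, train a classifier, and use it to flag actionable warnings) can be sketched roughly as follows. This is a hypothetical, minimal Python/scikit-learn illustration, not any surveyed system; the warning features, labels, and checker names are invented.

        # Hypothetical sketch of a minimal ML-based AWI pipeline: featurise
        # static-analyzer warnings and train a binary classifier that predicts
        # whether a warning is actionable. All data below is invented.
        from sklearn.feature_extraction import DictVectorizer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.pipeline import make_pipeline

        # Warning dataset preparation: each warning as a small feature dict,
        # labelled from historical triage decisions (1 = actionable, 0 = not).
        warning_data = [
            {"checker": "NullDereference", "file_churn": 14, "warning_age_days": 3},
            {"checker": "UnusedVariable",  "file_churn": 1,  "warning_age_days": 400},
            {"checker": "BufferOverflow",  "file_churn": 9,  "warning_age_days": 10},
            {"checker": "UnusedVariable",  "file_churn": 0,  "warning_age_days": 900},
        ]
        labels = [1, 0, 1, 0]

        # Preprocessing + model construction: one-hot encode categorical fields,
        # keep numeric ones, and fit a random forest on the labelled warnings.
        model = make_pipeline(DictVectorizer(sparse=False),
                              RandomForestClassifier(n_estimators=50, random_state=0))
        model.fit(warning_data, labels)

        # Use: score an incoming warning by its predicted actionability.
        new_warning = {"checker": "NullDereference", "file_churn": 7,
                       "warning_age_days": 1}
        print(model.predict_proba([new_warning])[0][1])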

    Ubiquitous volume rendering in the web platform

    Get PDF
    176 p. The main thesis hypothesis is that ubiquitous volume rendering can be achieved using WebGL. The thesis enumerates the challenges that must be met to achieve that goal. The results allow web content developers to integrate interactive volume rendering within standard HTML5 web pages. Content developers only need to declare the X3D nodes that provide the rendering characteristics they desire. In contrast to systems that rely on specific GPU programs, the presented architecture automatically creates the GPU code required by the WebGL graphics pipeline. This code is generated directly from the X3D nodes declared in the virtual scene; therefore, content developers do not need any knowledge of the GPU. The thesis extends previous research on web-compatible volume data structures for WebGL, ray-casting hybrid surface and volumetric rendering, progressive volume rendering, and some specific problems related to the visualization of medical datasets. Finally, the thesis contributes to the X3D standard with proposals to extend and improve its volume rendering component. The proposals are at an advanced stage towards their acceptance by the Web3D Consortium.
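    As an intuition for the kind of GPU code such an architecture generates, direct volume rendering casts a ray per pixel through the dataset and composites colour and opacity front to back. The sketch below is a hypothetical CPU/NumPy toy that marches along one axis of a synthetic volume; it is not the thesis' WebGL/X3D implementation, and the transfer function is invented.

        # Hypothetical CPU toy of ray-cast volume rendering: march front to back
        # along one axis of a scalar volume, map samples through a simple
        # transfer function, and composite colour/opacity per ray.
        import numpy as np

        rng = np.random.default_rng(0)
        volume = rng.random((64, 64, 64))       # synthetic scalar field in [0, 1]

        def transfer(sample):
            """Toy transfer function: brightness and opacity grow with the value."""
            return sample, 0.05 * sample        # (colour, per-step opacity)

        def render(vol):
            h, w, depth = vol.shape
            image = np.zeros((h, w))
            accum_alpha = np.zeros((h, w))
            for z in range(depth):              # one step per slice, front to back
                colour, alpha = transfer(vol[:, :, z])
                weight = (1.0 - accum_alpha) * alpha
                image += weight * colour
                accum_alpha += weight
                if np.all(accum_alpha > 0.95):  # early ray termination
                    break
            return image

        print(render(volume).shape)             # (64, 64) composited image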