Self-explaining AI as an alternative to interpretable AI
The ability to explain decisions made by AI systems is highly sought after,
especially in domains where human lives are at stake, such as medicine or
autonomous vehicles. While it is often possible to approximate the input-output
relations of deep neural networks with a few human-understandable rules, the
discovery of the double descent phenomenon suggests that such approximations do
not accurately capture the mechanism by which deep neural networks work. Double
descent indicates that deep neural networks typically operate by smoothly
interpolating between data points rather than by extracting a few high level
rules. As a result, neural networks trained on complex real world data are
inherently hard to interpret and prone to failure if asked to extrapolate. To
show how we might be able to trust AI despite these problems, we introduce the
concept of self-explaining AI. Self-explaining AIs are capable of providing a
human-understandable explanation of each decision along with confidence levels
for both the decision and explanation. For this approach to work, it is
important that the explanation actually be related to the decision, ideally
capturing the mechanism used to arrive at the decision. Finally, we argue it
is important that deep learning based systems include a "warning light" based
on techniques from applicability domain analysis to warn the user if a model is
asked to extrapolate outside its training distribution. For a video
presentation of this talk, see https://www.youtube.com/watch?v=Py7PVdcu7WY .
Comment: 10 pages, 2-column format
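As a concrete illustration of the proposed "warning light", the following minimal sketch flags inputs that fall outside the training distribution. It is not the paper's implementation: the k-nearest-neighbor distance criterion, the 95th-percentile cutoff, and all names are assumptions made for illustration.

    # Minimal sketch of an applicability-domain "warning light" (assumed
    # approach: k-NN distance to the training set; illustrative, not the
    # paper's method).
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 8))      # stand-in for real training data

    nn = NearestNeighbors(n_neighbors=5).fit(X_train)

    # Calibrate a threshold from distances within the training set itself
    # (column 0 of the result is each point's zero distance to itself).
    train_dist, _ = nn.kneighbors(X_train, n_neighbors=6)
    threshold = np.percentile(train_dist[:, 1:].mean(axis=1), 95)

    def warning_light(x):
        """Return True if x looks out-of-distribution for the trained model."""
        dist, _ = nn.kneighbors(np.atleast_2d(x))
        return float(dist.mean()) > threshold

    print(warning_light(rng.normal(size=8)))   # near the data: likely False
    print(warning_light(np.full(8, 10.0)))     # far from the data: True

A distance-based check like this is one of the simpler applicability-domain techniques; density models or autoencoder reconstruction error are common alternatives.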
Towards Declarative Programming of Conceptual Models
This article introduces basic functions and architectural issues that help in building a tool for programming conceptual models, one that is not specific to a particular problem class or problem-solving method. Our work is based on the KADS method, which had to be modified in some respects to enable declarative programming of inference knowledge as well as domain knowledge. We show how knowledge sources can be described as semantic network modules. Knowledge sources are instantiated from generic descriptions. All resulting semantic networks are part of a modular knowledge base, each module representing knowledge at its own level of granularity. We introduce functions that define views between semantic networks; these views connect the declarative representation of knowledge sources on the inference layer to parts of the domain-layer network. We consider only the interconnection of the domain and inference layers.
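As a rough illustration of the layering described above, the sketch below models semantic network modules and a view that connects an inference-layer knowledge source to the domain layer. All names (SemanticNetwork, the roles, the example facts) are hypothetical; the article's actual tool and notation are not reproduced here.

    # Hypothetical sketch: semantic network modules in a modular knowledge
    # base, plus a view mapping inference-layer roles onto the domain layer.

    class SemanticNetwork:
        """One module of the knowledge base: a set of labeled edges."""
        def __init__(self, name):
            self.name = name
            self.edges = []                    # (source, relation, target)

        def add(self, source, relation, target):
            self.edges.append((source, relation, target))

    # Domain layer: concrete domain knowledge.
    domain = SemanticNetwork("domain")
    domain.add("measles", "has-symptom", "fever")
    domain.add("flu", "has-symptom", "cough")

    # Inference layer: a knowledge source instantiated from a generic
    # description, referring only to abstract roles.
    inference = SemanticNetwork("inference")
    inference.add("abstract-step", "input-role", "observable")
    inference.add("abstract-step", "output-role", "hypothesis")

    def define_view(domain_net, relation):
        """Map the abstract roles onto parts of the domain-layer network."""
        return {
            "observable": [t for (_, r, t) in domain_net.edges if r == relation],
            "hypothesis": [s for (s, r, _) in domain_net.edges if r == relation],
        }

    view = define_view(domain, "has-symptom")
    print(view["observable"])   # ['fever', 'cough']
    print(view["hypothesis"])   # ['measles', 'flu']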