On Logic-Based Explainability with Partially Specified Inputs
In the practical deployment of machine learning (ML) models, missing data
represents a recurring challenge. Missing data is often addressed when training ML models, but it must also be addressed when making predictions and when explaining those predictions. Moreover, missing data represents an opportunity to partially specify the inputs of the prediction to be explained.
This paper studies the computation of logic-based explanations in the presence
of partially specified inputs. The paper shows that most of the algorithms
proposed in recent years for computing logic-based explanations can be generalized to compute explanations given partially specified inputs.
One related result is that the complexity of computing logic-based explanations
remains unchanged. A similar result is proved in the case of logic-based
explainability subject to input constraints. Furthermore, the proposed solution
for computing explanations given partially specified inputs is applied to
classifiers obtained from well-known public datasets, thereby illustrating a
number of novel explainability use cases.
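To make the setting concrete, the sketch below shows how a standard deletion-based procedure for computing one abductive explanation carries over to partially specified inputs: the explanation is drawn only from the specified features, and entailment of the prediction is checked over all completions of the unspecified ones. The classifier, feature names and partial input are hypothetical, and the brute-force entailment check merely stands in for the logical encodings and reasoners used in practice.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# deletion-based abductive explanation restricted to the specified features,
# with entailment checked by brute force over completions of the missing ones.
from itertools import product

FEATURES = ["x1", "x2", "x3", "x4"]

def classifier(point: dict) -> int:
    # Toy classifier: predicts 1 iff x1 and (x2 or x3); x4 is irrelevant.
    return int(point["x1"] and (point["x2"] or point["x3"]))

def entails(fixed: dict, cls: int) -> bool:
    """True iff every completion of 'fixed' over the remaining features gets class 'cls'."""
    free = [f for f in FEATURES if f not in fixed]
    return all(classifier(dict(fixed, **dict(zip(free, vals)))) == cls
               for vals in product([0, 1], repeat=len(free)))

def explain(partial: dict) -> dict:
    """One subset-minimal explanation drawn only from the specified features."""
    cls = classifier({f: partial.get(f, 0) for f in FEATURES})  # prediction of one completion
    assert entails(partial, cls), "prediction not determined by the partially specified input"
    expl = dict(partial)
    for feat in list(partial):                       # deletion-based minimization
        trial = {f: v for f, v in expl.items() if f != feat}
        if entails(trial, cls):
            expl = trial
    return expl

# x4 is missing; the specified features already determine the prediction.
print(explain({"x1": 1, "x2": 1, "x3": 0}))          # {'x1': 1, 'x2': 1}
```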
On Tackling Explanation Redundancy in Decision Trees
Decision trees (DTs) epitomize the ideal of interpretability of machine
learning (ML) models. The interpretability of decision trees motivates explainability approaches based on so-called intrinsic interpretability, and it is at
the core of recent proposals for applying interpretable ML models in high-risk
applications. The belief in DT interpretability is justified by the fact that
explanations for DT predictions are generally expected to be succinct. Indeed,
in the case of DTs, explanations correspond to DT paths. Since decision trees
are ideally shallow, and so paths contain far fewer features than the total
number of features, explanations in DTs are expected to be succinct, and hence
interpretable. This paper offers both theoretical and experimental arguments
demonstrating that, insofar as the interpretability of decision trees is equated with the succinctness of explanations, decision trees ought not be deemed
interpretable. The paper introduces logically rigorous path explanations and
path explanation redundancy, and proves that there exist functions for which
decision trees must exhibit paths with arbitrarily large explanation
redundancy. The paper also proves that only a very restricted class of
functions can be represented with DTs that exhibit no explanation redundancy.
In addition, the paper includes experimental results substantiating that path
explanation redundancy is observed ubiquitously in decision trees, both in trees obtained with different tree learning algorithms and in a wide range of publicly available decision trees. The paper also proposes
polynomial-time algorithms for eliminating path explanation redundancy, which
in practice require negligible time to compute. Thus, these algorithms serve to
indirectly attain irreducible, and so succinct, explanations for decision
trees.
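The deletion-based idea behind such algorithms can be illustrated with a small sketch over a toy decision tree with binary features: collect the literals of the instance's root-to-leaf path, then drop every literal whose removal still forces all reachable leaves to carry the predicted class. The tree, feature names and instance are made up for illustration, and this is a brute-force rendition rather than the paper's algorithms.

```python
# Minimal sketch (illustrative assumptions) of removing path explanation
# redundancy in a decision tree over binary features.
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    cls: int

@dataclass
class Node:
    feat: str       # feature tested at this node
    lo: "Tree"      # subtree taken when feat == 0
    hi: "Tree"      # subtree taken when feat == 1

Tree = Union[Leaf, Node]

def reachable_classes(t: Tree, fixed: dict) -> set:
    """Classes of all leaves reachable when only 'fixed' features are constrained."""
    if isinstance(t, Leaf):
        return {t.cls}
    if t.feat in fixed:
        return reachable_classes(t.hi if fixed[t.feat] == 1 else t.lo, fixed)
    return reachable_classes(t.lo, fixed) | reachable_classes(t.hi, fixed)

def path_literals(t: Tree, point: dict):
    """Literals of the root-to-leaf path followed by 'point', and the predicted class."""
    lits = {}
    while isinstance(t, Node):
        lits[t.feat] = point[t.feat]
        t = t.hi if point[t.feat] == 1 else t.lo
    return lits, t.cls

def irredundant_explanation(t: Tree, point: dict) -> dict:
    """Drop every path literal that is not needed to entail the prediction."""
    lits, cls = path_literals(t, point)
    kept = dict(lits)
    for feat in list(lits):
        trial = {f: v for f, v in kept.items() if f != feat}
        if reachable_classes(t, trial) == {cls}:   # still entails the class
            kept = trial
    return kept

# Toy tree: class 1 iff x1 == 1, but the tree also (redundantly) tests x2.
tree = Node("x1", Leaf(0), Node("x2", Leaf(1), Leaf(1)))
print(irredundant_explanation(tree, {"x1": 1, "x2": 0}))   # {'x1': 1}
```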
Logic-Based Explainability in Machine Learning
The last decade witnessed an ever-increasing stream of successes in Machine
Learning (ML). These successes offer clear evidence that ML is bound to become
pervasive in a wide range of practical uses, including many that directly
affect humans. Unfortunately, the operation of the most successful ML models is
incomprehensible to human decision makers. As a result, the use of ML models, especially in high-risk and safety-critical settings, is not without concern. In
recent years, there have been efforts to devise approaches for explaining ML
models. Most of these efforts have focused on so-called model-agnostic
approaches. However, all model-agnostic and related approaches offer no guarantees of rigor, and are therefore referred to as non-formal. For example, such
non-formal explanations can be consistent with different predictions, which
renders them useless in practice. This paper overviews the ongoing research
efforts on computing rigorous, model-based explanations of ML models, which are referred to as formal explanations. These efforts encompass a variety of topics, including the actual definitions of explanations, the
characterization of the complexity of computing explanations, the currently
best logical encodings for reasoning about different ML models, and also how to
make explanations interpretable for human decision makers, among other topics.
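For orientation, one standard way this line of work formalizes a formal (abductive) explanation — the notation below is assumed here for illustration, not quoted from the paper — is as a subset-minimal set of features X such that fixing only those features to their values in the given instance v, with prediction κ(v) = c, already entails the prediction:

\[
\forall \mathbf{x} \in \mathbb{F}.\;
\Big( \bigwedge_{i \in X} x_i = v_i \Big) \rightarrow \big( \kappa(\mathbf{x}) = c \big)
\]

An explanation satisfying this condition is, by construction, consistent with a single prediction, which is precisely the rigor that non-formal explanations may lack.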
Computing explanations for interactive constraint-based systems
Constraint programming has emerged as a successful paradigm for modelling
combinatorial problems arising from practical situations. In many of those situations,
we are not provided with an immutable set of constraints. Instead, a user
will modify his requirements, in an interactive fashion, until he is satisfied with
a solution. Examples of such applications include, amongst others, model-based
diagnosis, expert systems, and product configurators.
The system he interacts with must be able to assist him by showing the consequences
of his requirements. Explanations are the ideal tool for providing this
assistance. However, existing notions of explanations fail to provide sufficient information.
We define new forms of explanations that aim to be more informative.
Even though explanation generation is a very hard task, the applications we consider demand a satisfactory level of interactivity, and therefore we cannot afford long computation times.
We introduce the concept of representative sets of relaxations, a compact set of
relaxations that shows the user at least one way to satisfy each of his requirements
and at least one way to relax them, and present an algorithm that efficiently computes
such sets. We introduce the concept of most soluble relaxations, maximising
the number of products they allow. We present algorithms to compute such relaxations
in times compatible with interactivity, achieving this by making use, interchangeably, of different types of compiled representations. We propose to generalise
the concept of prime implicates to constraint problems with the concept of domain
consequences, and suggest generating them as a compilation strategy. This establishes a new approach to compilation, and allows explanation-related queries to be answered efficiently. We define ordered automata to compactly represent large sets of domain consequences, orthogonally to existing compilation techniques that represent large sets of solutions.
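The notion of a relaxation can be made concrete with a small brute-force sketch over a toy product catalogue: a relaxation is a subset of the user's requirements satisfied by at least one product, and a representative set gathers relaxations so that every requirement is kept in at least one of them and, whenever some maximal relaxation drops it, relaxed in another. The catalogue, the requirements and the exhaustive enumeration below are assumptions made for illustration; they stand in for the compiled representations on which the thesis relies.

```python
# Toy sketch (illustrative assumptions, not the thesis algorithms) of
# relaxations of user requirements over a small product catalogue.
from itertools import combinations

# Hypothetical catalogue: each product is a dict of option values.
CATALOGUE = [
    {"colour": "red",  "engine": "petrol", "roof": "sun"},
    {"colour": "red",  "engine": "diesel", "roof": "none"},
    {"colour": "blue", "engine": "petrol", "roof": "none"},
]

# Hypothetical user requirements (jointly unsatisfiable on this catalogue).
REQUIREMENTS = {
    "red car":  lambda p: p["colour"] == "red",
    "diesel":   lambda p: p["engine"] == "diesel",
    "sun roof": lambda p: p["roof"] == "sun",
}

def soluble(reqs) -> bool:
    """A set of requirements is soluble if at least one product satisfies all of them."""
    return any(all(REQUIREMENTS[r](p) for r in reqs) for p in CATALOGUE)

def maximal_relaxations():
    """All subset-maximal soluble subsets of the requirements (brute force)."""
    names = list(REQUIREMENTS)
    soluble_sets = [set(c) for k in range(len(names), -1, -1)
                    for c in combinations(names, k) if soluble(c)]
    return [s for s in soluble_sets if not any(s < t for t in soluble_sets)]

def representative_set():
    """Relaxations covering, for each requirement, one way to keep it and,
    when some maximal relaxation drops it, one way to relax it."""
    relaxations = maximal_relaxations()
    chosen = []
    for r in REQUIREMENTS:
        kept = next((s for s in relaxations if r in s), None)
        dropped = next((s for s in relaxations if r not in s), None)
        for s in (kept, dropped):
            if s is not None and s not in chosen:
                chosen.append(s)
    return chosen

for relaxation in representative_set():
    print(sorted(relaxation))   # ['diesel', 'red car'] and ['red car', 'sun roof']
```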
Explanation in constraint satisfaction: A survey
Much of the work on explanation in the field of artificial intelligence has focused on machine learning methods and, in particular, on concepts produced by advanced methods such as neural networks and deep learning. However, there is a long history of explanation generation in the general field of constraint satisfaction, one of AI's most ubiquitous subfields. In this paper we survey the major seminal papers on explanation and constraints, as well as some more recent works. The survey sets out to unify many disparate lines of work in areas such as model-based diagnosis, constraint programming, Boolean satisfiability, truth maintenance systems, quantified logics, and related areas.
On the Complexity of Axiom Pinpointing in Description Logics
We investigate the computational complexity of axiom pinpointing in Description Logics, which is the task of finding minimal subsets of a knowledge base that have a given consequence. We consider the problems of enumerating such subsets with and without order, and show hardness results that already hold for the propositional Horn fragment, or for the Description Logic EL. We show complexity results for several other related decision and enumeration problems for these fragments that extend to more expressive logics. In particular, we show that the hardness of these problems depends not only on the expressivity of the fragment, but also on the shape of the axioms used.
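The underlying task — finding one minimal subset of a knowledge base with a given consequence — can be illustrated with the classic deletion-based procedure over the propositional Horn fragment, where entailment is decided by forward chaining. The toy rules, the query and the encoding of axioms as (body, head) pairs are assumptions made for this sketch; it illustrates the problem itself, not the reductions studied in the paper.

```python
# Minimal deletion-based sketch (illustrative assumptions) of computing one
# minimal axiom set for a propositional Horn knowledge base.

def entails(axioms, fact: str) -> bool:
    """Forward chaining: does the Horn KB entail 'fact'?"""
    derived = set()
    changed = True
    while changed:
        changed = False
        for body, head in axioms:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return fact in derived

def one_mina(axioms, fact: str):
    """One minimal subset of 'axioms' that still entails 'fact' (deletion-based)."""
    assert entails(axioms, fact)
    kept = list(axioms)
    for ax in list(kept):
        trial = [a for a in kept if a != ax]
        if entails(trial, fact):       # the axiom is not needed, drop it for good
            kept = trial
    return kept

# Toy Horn KB: facts have empty bodies; C is derivable in two redundant ways.
KB = [
    ((), "A"),
    ((), "B"),
    (("A",), "C"),
    (("B",), "C"),
    (("C",), "D"),
]
print(one_mina(KB, "D"))   # [((), 'B'), (('B',), 'C'), (('C',), 'D')]
```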