A Symbolic Language for Interpreting Decision Trees
The recent development of formal explainable AI has disputed the folklore
claim that "decision trees are readily interpretable models", exhibiting
interpretability queries that are computationally hard on decision trees and
proposing different methods to deal with them in practice. Nonetheless,
no single explainability query or score works as a "silver bullet" that is
appropriate for every context and end-user. This naturally suggests the
possibility of "interpretability languages" in which a wide variety of queries
can be expressed, giving control to the end-user to tailor queries to their
particular needs. In this context, our work presents ExplainDT, a symbolic
language for interpreting decision trees. ExplainDT is rooted in a carefully
constructed fragment of first-order logic that we call StratiFOILed.
StratiFOILed balances expressiveness and complexity of evaluation, allowing for
the computation of many post-hoc explanations--both local (e.g., abductive and
contrastive explanations) and global ones (e.g., feature relevancy)--while
remaining in the Boolean Hierarchy over NP. Furthermore, StratiFOILed queries
can be written as a Boolean combination of NP-problems, thus allowing us to
evaluate them in practice with a constant number of calls to a SAT solver. On
the theoretical side, our main contribution is an in-depth analysis of the
expressiveness and complexity of StratiFOILed, while on the practical side, we
provide an optimized implementation for encoding StratiFOILed queries as
propositional formulas, together with an experimental study on its efficiency.
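To make the flavour of such local queries concrete, the sketch below checks a minimal abductive explanation (a smallest sufficient reason) on a toy binary decision tree by exhaustive enumeration. This is an illustration of the query semantics only, not the paper's SAT-based encoding; the tree, the instance, and all helper names are invented for the example.

```python
from itertools import product, combinations

# Toy decision tree over three binary features x0, x1, x2, encoded as
# nested tuples (feature, left_subtree, right_subtree) with integer leaves.
TREE = (0,
        (1, 0, 1),   # x0 = 0: class equals x1
        (2, 0, 1))   # x0 = 1: class equals x2

def classify(tree, x):
    while isinstance(tree, tuple):
        feat, lo, hi = tree
        tree = hi if x[feat] else lo
    return tree

def is_sufficient(x, S):
    """True if fixing the features in S to their values in x fixes the class."""
    target = classify(TREE, x)
    free = [i for i in range(len(x)) if i not in S]
    for bits in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, b in zip(free, bits):
            y[i] = b
        if classify(TREE, y) != target:
            return False
    return True

def minimal_abductive(x):
    """Smallest feature set whose values alone entail the prediction."""
    n = len(x)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if is_sufficient(x, set(S)):
                return set(S)

x = (1, 0, 1)                  # instance to explain
print(classify(TREE, x))       # its predicted class
print(minimal_abductive(x))    # a minimal sufficient reason
```

A symbolic language in the spirit of ExplainDT would express such checks as queries; the enumeration here is exponential and only meant to show what is being asked, while the paper's approach delegates the hard work to SAT calls.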
No silver bullet: interpretable ML models must be explained
Recent years have witnessed a number of proposals for the use of so-called interpretable models in specific application domains, including high-risk and even safety-critical ones. In contrast, other works have reported pitfalls of machine learning model interpretability, in part attributed to the lack of a rigorous definition of what an interpretable model should represent. This study proposes to relate interpretability with the ability of a model to offer explanations of why a prediction is made given some point in feature space. Under this general goal of offering explanations for predictions, the study reveals additional limitations of interpretable models. Concretely, it considers application domains where the purpose is to help human decision makers understand why some prediction was made, or why some other prediction was not made, and where irreducible (and so minimal) information is sought. In such domains, the study argues that answers to such why (or why-not) questions can exhibit arbitrary redundancy, i.e., they can be further simplified, whenever those answers are obtained by human inspection of the interpretable ML model representation.
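The redundancy phenomenon described above can be seen on a tiny decision tree: the explanation a human reads off the root-to-leaf path can mention strictly more features than are needed to entail the prediction. The tree and helper names below are invented purely for illustration.

```python
from itertools import product

# Toy tree (feature, left, right) with integer leaves; both children of the
# inner node predict class 1, so the test on feature 1 is redundant.
TREE = (0, 0, (1, 1, 1))

def classify(tree, x):
    while isinstance(tree, tuple):
        feat, lo, hi = tree
        tree = hi if x[feat] else lo
    return tree

def path_features(tree, x):
    """Features tested on the root-to-leaf path for x: the 'read-off' answer."""
    feats = []
    while isinstance(tree, tuple):
        feat, lo, hi = tree
        feats.append(feat)
        tree = hi if x[feat] else lo
    return feats

def is_sufficient(x, S):
    """True if fixing only the features in S already entails the prediction."""
    target = classify(TREE, x)
    free = [i for i in range(len(x)) if i not in S]
    for bits in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, b in zip(free, bits):
            y[i] = b
        if classify(TREE, y) != target:
            return False
    return True

x = (1, 1)
print(path_features(TREE, x))   # the path mentions both features
print(is_sufficient(x, {0}))    # yet feature 0 alone entails the prediction
```

The path-based answer a human inspects lists two features, but one of them can be dropped without changing what is entailed, which is exactly the redundancy the study points at.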
QSETH strikes again: finer quantum lower bounds for lattice problem, strong simulation, hitting set problem, and more
While seemingly undesirable, it is not a surprising fact that there are
certain problems for which quantum computers offer no computational advantage
over their respective classical counterparts. Moreover, there are problems for
which there is no `useful' computational advantage possible with the current
quantum hardware. This situation however can be beneficial if we don't want
quantum computers to solve certain problems fast - say problems relevant to
post-quantum cryptography. In such a situation, we would like to have evidence
that it is difficult to solve those problems on quantum computers; but what is
their exact complexity?
To answer this, one has to prove lower bounds, but proving unconditional time lower
bounds has never been easy. As a result, resorting to conditional lower bounds
has been quite popular in the classical community and is gaining momentum in
the quantum community. In this paper, by the use of the QSETH framework
[Buhrman-Patro-Speelman 2021], we are able to understand the quantum complexity
of a few natural variants of CNFSAT, such as parity-CNFSAT or counting-CNFSAT,
and are also able to comment on the non-trivial complexity of
approximate-#CNFSAT; both of these results have interesting implications for the
complexity of (variations of) lattice problems, strong simulation, the hitting
set problem, and more.
In the process, we explore the QSETH framework in greater detail than was
(required and) discussed in the original paper, thus also serving as a useful
guide on how to effectively use the QSETH framework.
Comment: 34 pages, 2 tables, 2 figures
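For intuition about the CNFSAT variants mentioned above, the brute-force sketch below computes #CNFSAT (counting) and parity-CNFSAT on a toy formula. It has nothing to do with the QSETH machinery itself; the DIMACS-style clause encoding (positive/negative integers for literals) is just a convention chosen for the example.

```python
from itertools import product

def count_sat(num_vars, clauses):
    """#CNFSAT by exhaustive enumeration: count satisfying assignments."""
    count = 0
    for bits in product([False, True], repeat=num_vars):
        # literal l is satisfied when variable |l| has truth value (l > 0)
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2); variables numbered from 1, negation = minus
clauses = [[1, 2], [-1, 2]]
n = count_sat(2, clauses)
print(n)        # number of satisfying assignments: 2
print(n % 2)    # parity-CNFSAT answer: 0
```

Counting takes 2^n steps here; the whole point of QSETH-style results is to give evidence that even quantum algorithms cannot do dramatically better for such variants.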
LIPIcs, Volume 244, ESA 2022, Complete Volume
Efficient and Generic Algorithms for Quantitative Attack Tree Analysis
Numerous analysis methods for quantitative attack tree analysis have been
proposed. These algorithms compute relevant security metrics, i.e., performance
indicators that quantify how secure a system is; typical metrics are the most
likely attack, the cheapest one, or the most damaging one. However,
existing methods are only geared towards specific metrics or do not work on
general attack trees. This paper classifies attack trees along two dimensions:
proper trees vs. directed acyclic graphs (i.e. with shared subtrees); and
static vs. dynamic gates. For three out of these four classes, we propose novel
algorithms that work over a generic attribute domain, encompassing a large
number of concrete security metrics defined on the attack tree semantics;
dynamic attack trees with directed acyclic graph structure are left as an open
problem. We also analyse the computational complexity of our methods.
Comment: Funding: ERC Consolidator (Grant Number: 864075) and European Union
(Grant Number: 101067199-ProSVED). Published in IEEE Transactions on Dependable
and Secure Computing, 2022.
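The generic-attribute-domain idea can be illustrated on a proper static attack tree: a single bottom-up pass, parameterised by an OR-operator and an AND-operator, yields different security metrics for different operator choices. The tree, leaf values, and operator pairs below are invented for illustration and are not the paper's algorithm.

```python
# Attack tree: ('OR', *children) / ('AND', *children) gates, string leaves.
# This is a proper tree (no shared subtrees), the case a bottom-up pass covers.
TREE = ('OR',
        ('AND', 'pick lock', 'disable alarm'),
        ('AND', 'steal key', 'bribe guard'))

def evaluate(node, leaf_attr, or_op, and_op):
    """One bottom-up pass over a generic attribute domain (or_op, and_op)."""
    if isinstance(node, str):
        return leaf_attr[node]
    gate, *children = node
    vals = [evaluate(c, leaf_attr, or_op, and_op) for c in children]
    op = or_op if gate == 'OR' else and_op
    result = vals[0]
    for v in vals[1:]:
        result = op(result, v)
    return result

# Metric 1: cheapest attack -- OR picks the cheaper branch, AND sums costs.
cost = {'pick lock': 50, 'disable alarm': 20, 'steal key': 10, 'bribe guard': 25}
print(evaluate(TREE, cost, min, lambda a, b: a + b))   # cheapest attack: 35

# Metric 2: most likely attack -- OR takes the max, AND multiplies probabilities.
prob = {'pick lock': 0.3, 'disable alarm': 0.5, 'steal key': 0.6, 'bribe guard': 0.4}
print(evaluate(TREE, prob, max, lambda a, b: a * b))   # most likely attack prob
```

Swapping only the operator pair re-targets the same traversal to a different metric, which is the sense in which one algorithm covers a large family of metrics; on directed acyclic graphs a shared subtree would be counted twice by this naive pass, hinting at why that case is harder.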
Efficient local search for Pseudo Boolean Optimization
Algorithms and the Foundations of Software Technology