On the Complexity of Counterfactual Reasoning
We study the computational complexity of counterfactual reasoning in relation
to the complexity of associational and interventional reasoning on structural
causal models (SCMs). We show that counterfactual reasoning is no harder than
associational or interventional reasoning on fully specified SCMs in the
context of two computational frameworks. The first framework is based on the
notion of treewidth and includes the classical variable elimination and
jointree algorithms. The second framework is based on the more recent and
refined notion of causal treewidth which is directed towards models with
functional dependencies such as SCMs. Our results are constructive and based on
bounding the (causal) treewidth of twin networks -- used in standard
counterfactual reasoning that contemplates two worlds, real and imaginary -- in
terms of the (causal) treewidth of the underlying SCM structure. In particular,
we show that the former (causal) treewidth is no more than twice the latter plus one.
Hence, if associational or interventional reasoning is tractable on a fully
specified SCM then counterfactual reasoning is tractable too. We extend our
results to general counterfactual reasoning that requires contemplating more
than two worlds and discuss applications of our results to counterfactual
reasoning with a partially specified SCM that is coupled with data. We finally
present empirical results that measure the gap between the complexities of
counterfactual reasoning and associational/interventional reasoning on random
SCMs. Comment: An earlier version of this paper appeared in the NeurIPS 2022
workshop "A causal view on dynamical systems".
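The twin-network construction mentioned in the abstract can be illustrated with a short sketch (assumed conventions only, not the paper's code): every endogenous node is duplicated into a real and an "imaginary" starred copy, while exogenous nodes are shared between the two worlds, which is what couples them.

```python
# Sketch of a twin network for counterfactual reasoning over an SCM.
# Endogenous nodes get a starred imaginary copy; exogenous (U) nodes
# are shared across the real and imaginary worlds.

def twin_network(edges, exogenous):
    """edges: list of (parent, child) pairs over SCM nodes.
    exogenous: set of exogenous node names (shared across worlds)."""
    twin = []
    for parent, child in edges:
        # the real world keeps the original names
        twin.append((parent, child))
        # the imaginary world uses primed copies of endogenous nodes only
        p = parent if parent in exogenous else parent + "*"
        c = child if child in exogenous else child + "*"
        twin.append((p, c))
    # deduplicate while preserving order
    seen, out = set(), []
    for e in twin:
        if e not in seen:
            seen.add(e)
            out.append(e)
    return out

edges = [("U1", "X"), ("X", "Y"), ("U2", "Y")]
print(twin_network(edges, {"U1", "U2"}))
# six edges: three real, three imaginary, sharing U1 and U2
```

Since each variable appears at most twice, one can see intuitively why the treewidth of this doubled structure stays within roughly a factor of two of the original.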
Online Inference for Adaptive Diagnosis via Arithmetic Circuit Compilation of Bayesian Networks
Considering the evolution of technology and complexity, the design of fully reliable embedded systems will be prohibitively complex and costly. Onboard diagnosis is a first solution that can be achieved by means of Bayesian networks. An efficient compilation of Bayesian inference is proposed using arithmetic circuits (ACs). ACs can be implemented efficiently in hardware to obtain very fast response times. This approach has recently been experimented with in the software health management of aircraft and UAVs. However, two kinds of obstacles must be addressed. First, the tree complexity can lead to intractable solutions, and second, an offline static analysis cannot capture the dynamic behaviour of a system that can have multiple configurations and applications. In this paper, we present our direction for solving these issues. Our approach relies on an adaptive version of the diagnosis computation for different kinds of UAV applications and missions. In particular, we consider an incremental generation of the AC structure. This adaptive diagnosis can be implemented using dynamic reconfiguration of FPGA circuits.
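The appeal of arithmetic circuits for onboard inference is that, once compiled, a query is just a single bottom-up pass over a DAG of sums and products. A minimal hand-built sketch (illustrative encoding, not the paper's compiler output) for a two-variable network A → B:

```python
# Evaluate an arithmetic circuit (AC) bottom-up.
# Node forms: ('num', v) parameter leaf; ('ind', var, val) indicator leaf;
# ('*', children); ('+', children).

def evaluate(node, evidence):
    kind = node[0]
    if kind == 'num':
        return node[1]
    if kind == 'ind':                        # indicator for var = val
        _, var, val = node
        # unobserved variables leave all their indicators at 1
        return 1.0 if evidence.get(var, val) == val else 0.0
    vals = [evaluate(c, evidence) for c in node[1]]
    if kind == '*':
        p = 1.0
        for v in vals:
            p *= v
        return p
    return sum(vals)                         # '+'

# AC for P(A, B) with A -> B, both binary (toy parameters):
def branch(a, pa, pb1):
    return ('*', [('ind', 'A', a), ('num', pa),
                  ('+', [('*', [('ind', 'B', 0), ('num', 1 - pb1)]),
                         ('*', [('ind', 'B', 1), ('num', pb1)])])])

ac = ('+', [branch(0, 0.7, 0.1), branch(1, 0.3, 0.8)])

print(evaluate(ac, {}))        # no evidence: sums to ~1.0
print(evaluate(ac, {'B': 1}))  # P(B=1) = 0.7*0.1 + 0.3*0.8 = 0.31
```

Because the pass is a fixed sequence of multiplies and adds with no control flow, it maps naturally onto hardware such as FPGAs, which is the property the abstract exploits.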
Model-Based Diagnosis using Structured System Descriptions
This paper presents a comprehensive approach for model-based diagnosis which
includes proposals for characterizing and computing preferred diagnoses,
assuming that the system description is augmented with a system structure (a
directed graph explicating the interconnections between system components).
Specifically, we first introduce the notion of a consequence, which is a
syntactically unconstrained propositional sentence that characterizes all
consistency-based diagnoses and show that standard characterizations of
diagnoses, such as minimal conflicts, correspond to syntactic variations on a
consequence. Second, we propose a new syntactic variation on the consequence
known as negation normal form (NNF) and discuss its merits compared to standard
variations. Third, we introduce a basic algorithm for computing consequences in
NNF given a structured system description. We show that if the system structure
does not contain cycles, then there is always a linear-size consequence in NNF
which can be computed in linear time. For arbitrary system structures, we show
a precise connection between the complexity of computing consequences and the
topology of the underlying system structure. Finally, we present an algorithm
that enumerates the preferred diagnoses characterized by a consequence. The
algorithm is shown to take linear time in the size of the consequence if the
preference criterion satisfies some general conditions. Comment: See http://www.jair.org/ for any accompanying file.
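Consistency-based diagnosis, the setting of this paper, can be shown on a toy example (brute-force illustration only, not the paper's consequence-based algorithm): a diagnosis is a set of components whose failure makes the observations consistent with the system description, and minimal diagnoses are those with no consistent subset.

```python
from itertools import combinations

# Two inverters in series:  inp --[A]-- mid --[B]-- out
# An "ok" component inverts its input; a faulty one is unconstrained.

def consistent(faulty, inp, out):
    """Is observing (inp, out) consistent with exactly `faulty` broken?"""
    for mid in (0, 1):                      # guess the internal wire value
        ok_a = 'A' in faulty or mid == 1 - inp
        ok_b = 'B' in faulty or out == 1 - mid
        if ok_a and ok_b:
            return True
    return False

def minimal_diagnoses(inp, out):
    comps, found = ['A', 'B'], []
    for size in range(len(comps) + 1):      # smallest fault sets first
        for faulty in combinations(comps, size):
            fs = set(faulty)
            if consistent(fs, inp, out) and not any(d <= fs for d in found):
                found.append(fs)
    return found

# Input 1 through two inverters should yield 1; observing 0 implicates
# at least one component, and either single fault explains it:
print(minimal_diagnoses(1, 0))   # [{'A'}, {'B'}]
```

The paper's contribution is to avoid this exponential enumeration by compiling the system description into a consequence in NNF, from which the preferred diagnoses can be read off efficiently when the system structure is well-behaved.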
Evaluation of optimization techniques for aggregation
Aggregations are almost always performed at the top of the operator tree, after all
selections and joins in a SQL query. In fact, they can be performed before joins,
and when used properly they make later joins much cheaper. Although some
enumeration algorithms considering eager aggregation have been proposed, no
sufficient evaluations are available to guide the adoption of this technique in
practice, and no evaluations have been done on real data sets and real queries
with estimated cardinalities. This means it is not known how eager aggregation
performs in the real world.
In this thesis, a new estimation method for group-by and join that combines a
traditional estimation method with index-based join sampling is proposed and
evaluated. Two enumeration algorithms considering eager aggregation are
implemented and compared in the context of estimated cardinalities. We find that
the new estimation method works well with little overhead, and that under
certain conditions eager aggregation can dramatically accelerate queries.
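The eager-aggregation rewrite the thesis evaluates can be shown on in-memory toy tables (an assumed schema for illustration): pre-aggregating the fact side before the join shrinks the join input while producing the same result as aggregating after the join.

```python
from collections import defaultdict

# SELECT o.cust, SUM(i.amount) FROM orders o JOIN items i ON o.oid = i.oid
# GROUP BY o.cust  --  the SUM can be pushed below the join onto `items`.

orders = [(1, 'ann'), (2, 'ann'), (3, 'bob')]        # (oid, cust)
items  = [(1, 10), (1, 5), (2, 7), (3, 2), (3, 2)]   # (oid, amount)

def lazy(orders, items):
    """Aggregate after the join (the conventional plan)."""
    total = defaultdict(int)
    for oid, cust in orders:
        for ioid, amt in items:
            if ioid == oid:
                total[cust] += amt
    return dict(total)

def eager(orders, items):
    """Aggregate items per oid first, then join one row per oid."""
    per_order = defaultdict(int)
    for oid, amt in items:
        per_order[oid] += amt            # smaller join input
    total = defaultdict(int)
    for oid, cust in orders:
        total[cust] += per_order.get(oid, 0)
    return dict(total)

print(lazy(orders, items))    # {'ann': 22, 'bob': 4}
print(eager(orders, items))   # identical result, cheaper join
```

The rewrite is only valid when the grouping and join keys line up appropriately, which is exactly why enumeration algorithms and cardinality estimates are needed to decide when to apply it.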
Fast factorisation of probabilistic potentials and its application to approximate inference in Bayesian networks
We present an efficient procedure for factorising probabilistic potentials represented as
probability trees. This new procedure is able to detect some regularities that cannot be
captured by existing methods. In cases where an exact decomposition is not achievable,
we propose a heuristic way to carry out approximate factorisations guided by a parameter
called factorisation degree, which is fast to compute. We show how this parameter can be
used to control the tradeoff between complexity and accuracy in approximate inference
algorithms for Bayesian networks.
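The core idea of exact factorisation can be sketched on a flat table rather than a probability tree (illustrative only, not the paper's tree-based procedure): a potential over two variables decomposes as a product of univariate factors exactly when its value matrix has rank one, i.e. all rows are proportional.

```python
# Try to factorise phi(X, Y) as f(X) * g(Y) by checking that every row
# of the value table is a scalar multiple of the first row.

def factorise(phi, eps=1e-9):
    """phi: dict (x, y) -> value. Return (f, g) with phi == f*g, or None."""
    xs = sorted({x for x, _ in phi})
    ys = sorted({y for _, y in phi})
    g = {y: phi[(xs[0], y)] for y in ys}          # first row serves as g(Y)
    f = {}
    for x in xs:
        ratios = [phi[(x, y)] / g[y] for y in ys if abs(g[y]) > eps]
        if not ratios or max(ratios) - min(ratios) > eps:
            return None                           # row not proportional
        f[x] = ratios[0]
    # verify the reconstruction before accepting it
    if all(abs(f[x] * g[y] - phi[(x, y)]) < eps for x in xs for y in ys):
        return f, g
    return None

phi = {(0, 0): 0.06, (0, 1): 0.14, (1, 0): 0.24, (1, 1): 0.56}
print(factorise(phi))   # recovers f(X) and g(Y) up to a shared scaling
```

When no exact decomposition exists (the rows are not proportional), this check fails; the paper's factorisation degree can be read as a fast measure of how close a potential is to admitting such a decomposition, guiding approximate factorisations.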
Binary Join Trees for Computing Marginals in the Shenoy-Shafer Architecture
The main goal of this paper is to describe a data structure called a binary join tree that is useful for computing multiple marginals efficiently in the Shenoy-Shafer architecture. We define binary join trees, describe their utility, and give a procedure for constructing them.
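The motivation behind binary join trees can be sketched in a few lines (an illustration of the pairwise-combination idea, not Shenoy's construction procedure): when many potentials must be combined at one node, doing so two at a time along a binary tree keeps every individual combination, and hence its intermediate domain, as small as possible.

```python
# Fold a list of potentials pairwise, level by level, so that each
# combination step involves exactly two operands (a binary tree of merges).

def binary_combine(potentials, combine):
    nodes = list(potentials)
    while len(nodes) > 1:
        merged = []
        for i in range(0, len(nodes) - 1, 2):
            merged.append(combine(nodes[i], nodes[i + 1]))
        if len(nodes) % 2:                # carry an odd leftover upward
            merged.append(nodes[-1])
        nodes = merged
    return nodes[0]

# Tracking domains only: combining two potentials unions their variables.
doms = [{'A'}, {'A', 'B'}, {'B', 'C'}, {'C', 'D'}, {'D'}]
print(binary_combine(doms, set.union))    # the full combined domain
```

In a real binary join tree the pairing is chosen to keep the unioned domains small at every step, rather than taken in list order as here.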