The Complexity of Reverse Engineering Problems for Conjunctive Queries
Reverse engineering problems for conjunctive queries (CQs), such as query by example (QBE) or definability, take a set of user examples and convert them into an explanatory CQ. Despite their importance, the complexity of these problems is prohibitively high (coNEXPTIME-complete). We isolate their two main sources of complexity and propose relaxations of them that reduce the complexity while having meaningful theoretical interpretations. The first relaxation is based on the idea of using existential pebble games for approximating homomorphism tests. We show that this characterizes QBE/definability for CQs up to treewidth k, while reducing the complexity to EXPTIME. As a side result, we obtain that the complexity of the QBE/definability problems for CQs of treewidth k is EXPTIME-complete for each k > 1. The second relaxation is based on the idea of "desynchronizing" direct products, which characterizes QBE/definability for unions of CQs and reduces the complexity to coNP. The combination of these two relaxations yields tractability for QBE and characterizes it in terms of unions of CQs of treewidth at most k. We also study the complexity of these problems for conjunctive regular path queries over graph databases, showing them to be no more difficult than for CQs.
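The QBE fitting test described above asks whether a CQ's answers over the example database cover every positive example and exclude every negative one, with CQ evaluation itself reducing to homomorphism tests. A brute-force toy sketch of that test for a single fixed candidate (my own illustration, not the paper's pebble-game machinery; the relation and constant names are invented):

```python
from itertools import product

def homomorphisms(atoms, db, head_vars):
    """Enumerate assignments of the CQ's variables to database constants
    under which every body atom appears in db (naive search; assumes
    every argument in atoms is a variable)."""
    variables = sorted({v for _, args in atoms for v in args})
    constants = sorted({c for _, args in db for c in args})
    for combo in product(constants, repeat=len(variables)):
        h = dict(zip(variables, combo))
        if all((rel, tuple(h[a] for a in args)) in db for rel, args in atoms):
            yield tuple(h[v] for v in head_vars)

def fits(atoms, head_vars, db, positives, negatives):
    """A candidate CQ explains the examples iff its answers over db
    include every positive example and no negative one."""
    answers = set(homomorphisms(atoms, db, head_vars))
    return set(positives) <= answers and not (set(negatives) & answers)

# Toy graph database: an edge relation over constants a, b, c.
db = {("edge", ("a", "b")), ("edge", ("b", "c"))}
# Candidate CQ: q(x, y) :- edge(x, z), edge(z, y)   (paths of length 2)
atoms = [("edge", ("x", "z")), ("edge", ("z", "y"))]
print(fits(atoms, ("x", "y"), db,
           positives=[("a", "c")], negatives=[("b", "a")]))  # -> True
```

The actual reverse engineering problem quantifies over all candidate CQs, which is where the coNEXPTIME lower bound comes from; this sketch only verifies one fixed candidate.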
Explanation in constraint satisfaction: A survey
Much of the work on explanation in the field of artificial intelligence has focused on machine learning methods and, in particular, on concepts produced by advanced methods such as neural networks and deep learning. However, there is a long history of explanation generation in the general field of constraint satisfaction, one of AI's most ubiquitous subfields. In this paper we survey the major seminal papers on explanation and constraints, as well as some more recent works. The survey sets out to unify many disparate lines of work in areas such as model-based diagnosis, constraint programming, Boolean satisfiability, truth maintenance systems, quantified logics, and related areas.
Finding counterfactual explanations through constraint relaxations
Interactive constraint systems often suffer from infeasibility (no solution) due to conflicting user constraints. A common approach to recovering feasibility is to eliminate the constraints that cause the conflicts in the system. This approach allows the system to provide an explanation of the form: "if the user is willing to drop some of their constraints, there exists a solution". However, one can criticise this form of explanation as not very informative. A counterfactual explanation is a type of explanation that can provide a basis for the user to recover feasibility by helping them understand which changes can be applied to their existing constraints rather than removing them. This approach has been extensively studied in the machine learning field, but requires a more thorough investigation in the context of constraint satisfaction. We propose an iterative method based on conflict detection and maximal relaxations in over-constrained constraint satisfaction problems to help compute a counterfactual explanation.
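The scheme the abstract describes rests on two ingredients: locating a minimal conflict among the user's constraints, and proposing the smallest change to a conflicting constraint that restores feasibility. A minimal sketch over interval constraints on a single variable (my own toy model; the paper targets general over-constrained CSPs and more sophisticated conflict detection):

```python
def feasible(constraints):
    """Constraints are (lo, hi) intervals on one variable; the system
    is feasible iff the intervals have a nonempty intersection."""
    if not constraints:
        return True
    return max(l for l, _ in constraints) <= min(h for _, h in constraints)

def minimal_conflict(constraints):
    """Deletion-based minimal unsatisfiable subset (a simple stand-in
    for dedicated conflict-detection algorithms). Assumes the input
    system is infeasible."""
    core = list(constraints)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if not feasible(rest):
            core = rest
    return core

def counterfactual(constraints):
    """For each constraint in a minimal conflict, suggest the smallest
    bound change that restores feasibility: a counterfactual explanation
    that modifies constraints instead of removing them."""
    core = minimal_conflict(constraints)
    suggestions = []
    for i, (l, h) in enumerate(core):
        others = [d for j, d in enumerate(core) if j != i]
        lo = max(d[0] for d in others)
        hi = min(d[1] for d in others)
        if l > hi:
            suggestions.append((l, h, (hi, h)))  # lower the lower bound to hi
        elif h < lo:
            suggestions.append((l, h, (l, lo)))  # raise the upper bound to lo
    return suggestions

# Conflicting requirements: x in [0,5], x in [3,10], x in [7,9].
print(counterfactual([(0, 5), (3, 10), (7, 9)]))
# suggests widening (0, 5) to (0, 7), or (7, 9) to (5, 9)
```

Each suggestion shows the user one change to one of their own constraints that would make the whole system solvable again, which is exactly the kind of actionable feedback the abstract contrasts with plain constraint removal.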
Generalizing Consistency and other Constraint Properties to Quantified Constraints
Quantified constraints and Quantified Boolean Formulae are typically much
more difficult to reason with than classical constraints, because quantifier
alternation makes the usual notion of solution inappropriate. As a consequence,
basic properties of Constraint Satisfaction Problems (CSP), such as consistency
or substitutability, are not completely understood in the quantified case.
These properties are important because they are the basis of most of the
reasoning methods used to solve classical (existentially quantified)
constraints, and one would like to benefit from similar reasoning methods in
the resolution of quantified constraints. In this paper, we show that most of
the properties that are used by solvers for CSP can be generalized to
quantified CSP. This requires a re-thinking of a number of basic concepts; in
particular, we propose a notion of outcome that generalizes the classical
notion of solution and on which all definitions are based. We propose a
systematic study of the relations which hold between these properties, as well
as complexity results regarding the decision of these properties. Finally, and
since these problems are typically intractable, we generalize the approach used
in CSP and propose weaker, easier-to-check notions based on locality, which
allow these properties to be detected incompletely but in polynomial time.
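The difficulty caused by quantifier alternation shows up directly in the semantics: a quantified CSP is decided over the whole quantifier prefix rather than by a single satisfying assignment. A naive recursive evaluator makes this concrete (a toy sketch assuming one shared finite domain and an explicit constraint checker):

```python
def qcsp_true(prefix, domain, check, assignment=()):
    """Naively decide a quantified CSP: prefix is a list of
    ('forall' | 'exists', var) pairs in order, and check tests a
    complete assignment of all variables."""
    if not prefix:
        return check(dict(assignment))
    quant, var = prefix[0]
    branches = (qcsp_true(prefix[1:], domain, check,
                          assignment + ((var, val),)) for val in domain)
    return all(branches) if quant == "forall" else any(branches)

# forall x exists y: x != y   over domain {0, 1}
print(qcsp_true([("forall", "x"), ("exists", "y")], [0, 1],
                lambda a: a["x"] != a["y"]))   # -> True
# exists y forall x: x != y   over domain {0, 1}
print(qcsp_true([("exists", "y"), ("forall", "x")], [0, 1],
                lambda a: a["x"] != a["y"]))   # -> False
```

The two calls differ only in quantifier order yet flip the answer, which is why the classical notion of solution (one satisfying assignment) is inappropriate here and a strategy-like notion of outcome is needed instead.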
Quantified Constraints in Twenty Seventeen
I present a survey of recent advances in the algorithmic and computational complexity theory of non-Boolean Quantified Constraint Satisfaction Problems, incorporating some more modern research directions.
Computing explanations for interactive constraint-based systems
Constraint programming has emerged as a successful paradigm for modelling
combinatorial problems arising from practical situations. In many of those situations,
we are not provided with an immutable set of constraints. Instead, a user
will modify his requirements, in an interactive fashion, until he is satisfied with
a solution. Examples of such applications include, amongst others, model-based
diagnosis, expert systems, and product configurators.
The system he interacts with must be able to assist him by showing the consequences
of his requirements. Explanations are the ideal tool for providing this
assistance. However, existing notions of explanations fail to provide sufficient information.
We define new forms of explanations that aim to be more informative.
Even though explanation generation is a very hard task, in the applications we
consider we must provide a satisfactory level of interactivity and therefore
cannot afford long computation times.
We introduce the concept of representative sets of relaxations, a compact set of
relaxations that shows the user at least one way to satisfy each of his requirements
and at least one way to relax them, and present an algorithm that efficiently computes
such sets. We introduce the concept of most soluble relaxations, which maximise
the number of products they allow. We present algorithms to compute such relaxations
in times compatible with interactivity, achieved by making use, interchangeably,
of different types of compiled representations. We propose to generalise
the concept of prime implicates to constraint problems through the concept of domain
consequences, and suggest generating them as a compilation strategy. This sets out
a new approach to compilation, and allows explanation-related queries to be addressed
efficiently. We define ordered automata to compactly represent large sets of
domain consequences, orthogonally to existing compilation techniques
that represent large sets of solutions.
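A relaxation in the sense above is a feasible subset of the user's requirements, and the interesting ones are the maximal such subsets: each shows the user which requirements can be kept together. A naive enumeration makes the concept concrete (an exponential toy sketch over interval constraints; the thesis's algorithms rely on compiled representations precisely to avoid this blow-up):

```python
from itertools import combinations

def maximal_relaxations(constraints, feasible):
    """Enumerate the maximal feasible subsets (relaxations) of the
    user's constraints by brute force, largest subsets first."""
    n = len(constraints)
    maximal = []
    for size in range(n, -1, -1):
        for idx in combinations(range(n), size):
            s = set(idx)
            # Skip subsets already covered by a found maximal relaxation.
            if any(s <= m for m in maximal):
                continue
            if feasible([constraints[i] for i in idx]):
                maximal.append(s)
    return maximal

# Interval constraints on one variable; feasibility = nonempty intersection.
cons = [(0, 5), (3, 10), (7, 9)]
def feas(cs):
    return not cs or max(l for l, _ in cs) <= min(h for _, h in cs)

print(maximal_relaxations(cons, feas))  # -> [{0, 1}, {1, 2}]
```

Here the two maximal relaxations show the user that the first two requirements can be satisfied together, as can the last two, but never all three: one way to satisfy, and one way to relax, each requirement.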
Proceedings of the 18th Irish Conference on Artificial Intelligence and Cognitive Science
These proceedings contain the papers that were accepted for publication at AICS-2007, the 18th Annual Conference on Artificial Intelligence and Cognitive Science, which was held at Technological University Dublin, Dublin, Ireland, from the 29th to the 31st of August 2007. AICS is the annual conference of the Artificial Intelligence Association of Ireland (AIAI).
Justicia: A Stochastic SAT Approach to Formally Verify Fairness
As a technology, ML is oblivious to societal good or bad, and thus the field
of fair machine learning has stepped up to propose multiple mathematical
definitions, algorithms, and systems to ensure different notions of fairness in
ML applications. Given the multitude of propositions, it has become imperative
to formally verify the fairness metrics satisfied by different algorithms on
different datasets. In this paper, we propose a stochastic
satisfiability (SSAT) framework, Justicia, that formally verifies different
fairness measures of supervised learning algorithms with respect to the
underlying data distribution. We instantiate Justicia on multiple
classification and bias mitigation algorithms, and datasets to verify different
fairness metrics, such as disparate impact, statistical parity, and equalized
odds. Justicia is scalable, accurate, and operates on non-Boolean and compound
sensitive attributes unlike existing distribution-based verifiers, such as
FairSquare and VeriFair. Being distribution-based by design, Justicia is more
robust than the verifiers, such as AIF360, that operate on specific test
samples. We also theoretically bound the finite-sample error of the verified
fairness measure.
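Justicia itself is an SSAT-based verifier that reasons about the underlying data distribution; the fairness metrics it checks, however, are easy to state. A minimal sketch of two of them on concrete predictions (illustrative only, assuming binary predictions and a binary sensitive attribute, which is simpler than the compound attributes Justicia handles):

```python
def statistical_parity(preds, groups):
    """Absolute difference in positive-prediction rates between the
    two groups: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    rates = {}
    for g in (0, 1):
        sel = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(sel) / len(sel)
    return abs(rates[0] - rates[1])

def disparate_impact(preds, groups):
    """Ratio of positive-prediction rates, min over max; the common
    '80% rule' asks for a ratio of at least 0.8."""
    rates = []
    for g in (0, 1):
        sel = [p for p, grp in zip(preds, groups) if grp == g]
        rates.append(sum(sel) / len(sel))
    return min(rates) / max(rates)

# Toy predictions for 8 individuals, 4 in each sensitive group.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity(preds, groups))  # 3/4 vs 1/4 -> 0.5
print(disparate_impact(preds, groups))    # 0.25 / 0.75 -> 1/3
```

On this data the disparate-impact ratio is well below 0.8, so a verifier would flag the classifier as unfair; sample-based tools measure these quantities on a test set, while a distribution-based verifier like Justicia bounds them over the data distribution itself.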