Backdoors to Normality for Disjunctive Logic Programs
Over the last two decades, propositional satisfiability (SAT) has become one
of the most successful and widely applied techniques for the solution of
NP-complete problems. The aim of this paper is to investigate theoretically how
SAT can be utilized for the efficient solution of problems that are harder than
NP or co-NP. In particular, we consider the fundamental reasoning problems in
propositional disjunctive answer set programming (ASP), Brave Reasoning and
Skeptical Reasoning, which ask whether a given atom is contained in at least
one or in all answer sets, respectively. Both problems are located at the
second level of the Polynomial Hierarchy and thus assumed to be harder than NP
or co-NP. One cannot transform these two reasoning problems into SAT in
polynomial time, unless the Polynomial Hierarchy collapses. We show that
certain structural aspects of disjunctive logic programs can be utilized to
break through this complexity barrier, using new techniques from Parameterized
Complexity. In particular, we exhibit transformations from Brave and Skeptical
Reasoning to SAT that run in time O(2^k n^2) where k is a structural parameter
of the instance and n the input size. In other words, the reduction is
fixed-parameter tractable for parameter k. As the parameter k we take the size
of a smallest backdoor with respect to the class of normal (i.e.,
disjunction-free) programs. Such a backdoor is a set of atoms that when deleted
makes the program normal. In consequence, the combinatorial explosion, which is
expected when transforming a problem from the second level of the Polynomial
Hierarchy to the first level, can now be confined to the parameter k, while the
running time of the reduction is polynomial in the input size n, where the
order of the polynomial is independent of k.

Comment: A short version will appear in the Proceedings of the 27th AAAI
Conference on Artificial Intelligence (AAAI'13). A preliminary version of the
paper was presented at the workshop Answer Set Programming and Other Computing
Paradigms (ASPOCP 2012), 5th International Workshop, September 4, 2012,
Budapest, Hungary.
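The backdoor notion above can be made concrete with a minimal sketch (this is only an illustration of the definition, not the paper's FPT reduction; the rule encoding as head/body atom sets is an assumption):

```python
# Toy check of a "deletion backdoor to normality": a set of atoms whose
# deletion leaves every rule head with at most one atom, i.e. a normal
# (disjunction-free) program. Rules are (head_atoms, body_atoms) pairs.

def is_normality_backdoor(program, backdoor):
    """program: list of (frozenset of head atoms, frozenset of body atoms)."""
    return all(len(heads - backdoor) <= 1 for heads, _body in program)

# Example: the rule "a | b :- c." becomes normal once atom b is deleted.
prog = [(frozenset({"a", "b"}), frozenset({"c"})),
        (frozenset({"d"}), frozenset({"a"}))]
print(is_normality_backdoor(prog, {"b"}))   # True
print(is_normality_backdoor(prog, set()))   # False: first head is disjunctive
```

Finding a smallest such backdoor, and then branching over the 2^k truth assignments to its atoms, is where the parameter k enters the O(2^k n^2) bound.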
Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning
We present a first theoretical analysis of the power of polynomial-time
preprocessing for important combinatorial problems from various areas in AI. We
consider problems from Constraint Satisfaction, Global Constraints,
Satisfiability, Nonmonotonic and Bayesian Reasoning under structural
restrictions. All these problems involve two tasks: (i) identifying the
structure in the input as required by the restriction, and (ii) using the
identified structure to solve the reasoning task efficiently. We show that for
most of the considered problems, task (i) admits a polynomial-time
preprocessing to a problem kernel whose size is polynomial in a structural
problem parameter of the input, in contrast to task (ii) which does not admit
such a reduction to a problem kernel of polynomial size, subject to a
complexity theoretic assumption. As a notable exception we show that the
consistency problem for the AtMost-NValue constraint admits a polynomial kernel
consisting of a quadratic number of variables and domain values. Our results
provide firm worst-case guarantees and theoretical boundaries for the
performance of polynomial-time preprocessing algorithms for the considered
problems.

Comment: arXiv admin note: substantial text overlap with arXiv:1104.2541,
arXiv:1104.556
Limits of Preprocessing
We present a first theoretical analysis of the power of polynomial-time
preprocessing for important combinatorial problems from various areas in AI. We
consider problems from Constraint Satisfaction, Global Constraints,
Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a
complexity theoretic assumption, none of the considered problems can be reduced
by polynomial-time preprocessing to a problem kernel whose size is polynomial
in a structural problem parameter of the input, such as induced width or
backdoor size. Our results provide a firm theoretical boundary for the
performance of polynomial-time preprocessing algorithms for the considered
problems.

Comment: This is a slightly longer version of a paper that appeared in the
proceedings of AAAI 201
Counting Complexity for Reasoning in Abstract Argumentation
In this paper, we consider counting and projected model counting of
extensions in abstract argumentation for various semantics. When asking for
projected counts we are interested in counting the number of extensions of a
given argumentation framework while multiple extensions that are identical when
restricted to the projected arguments count as only one projected extension. We
establish classical complexity results and parameterized complexity results
when the problems are parameterized by treewidth of the undirected
argumentation graph. To obtain upper bounds for counting projected extensions,
we introduce novel algorithms that exploit small treewidth of the undirected
argumentation graph of the input instance by dynamic programming (DP). Our
algorithms run in time double or triple exponential in the treewidth depending
on the considered semantics. Finally, we take the exponential time hypothesis
(ETH) into account and establish lower bounds of bounded treewidth algorithms
for counting extensions and projected extensions.

Comment: Extended version of a paper published at AAAI-1
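The projected-count notion described above has a direct brute-force reading (a toy sketch over already-computed extensions; the paper's treewidth-based DP algorithms are not reproduced here, and the example data is invented):

```python
# Count projected extensions: extensions that coincide when restricted
# to the projected arguments are counted only once.

def projected_count(extensions, projected_args):
    """extensions: iterable of sets of argument names."""
    return len({frozenset(e & projected_args) for e in extensions})

exts = [{"a", "b"}, {"a", "c"}, {"b", "c"}]
print(len(exts))                      # 3 extensions in total
print(projected_count(exts, {"a"}))   # 2: projections {a} and {}
```

This makes the gap explicit: three extensions collapse to two once only argument "a" is visible, which is exactly why projected counting is a different (and harder) problem than plain counting.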
Combinatorial Voter Control in Elections
Voter control problems model situations such as an external agent trying to
affect the result of an election by adding voters, for example by convincing
some voters to vote who would otherwise not attend the election. Traditionally,
voters are added one at a time, with the goal of making a distinguished
alternative win by adding a minimum number of voters. In this paper, we
initiate the study of combinatorial variants of control by adding voters: in
our setting, when we choose to add a voter, we also have to add a whole bundle
of voters associated with that voter. We study the computational complexity of
this problem for two of the most basic voting rules, namely the Plurality rule
and the Condorcet rule.

Comment: An extended abstract appears in MFCS 201
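A brute-force sketch of the bundled setting under Plurality (a toy instance with invented votes and bundles; each voter is reduced to the single alternative they vote for, which suffices for Plurality):

```python
from itertools import combinations

def plurality_winners(votes):
    """Return the list of alternatives with the highest vote count."""
    tally = {}
    for v in votes:
        tally[v] = tally.get(v, 0) + 1
    best = max(tally.values())
    return [a for a, t in tally.items() if t == best]

def min_bundles_to_make_win(base_votes, bundles, target, limit):
    """Smallest number of bundles whose addition makes target the unique
    Plurality winner, or None if no choice of at most `limit` bundles works."""
    for k in range(limit + 1):
        for chosen in combinations(bundles, k):
            votes = list(base_votes)
            for bundle in chosen:
                votes.extend(bundle)
            if plurality_winners(votes) == [target]:
                return k
    return None

base = ["q", "q", "p"]                 # q leads 2-1
bundles = [["p", "q"], ["p"], ["p"]]   # first bundle also brings a q voter
print(min_bundles_to_make_win(base, bundles, "p", 3))  # 2
```

The bundle ["p", "q"] shows the combinatorial twist: adding a helpful voter can force in a harmful one, so greedily picking bundles that mention the target is not enough.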
From Causes for Database Queries to Repairs and Model-Based Diagnosis and Back
In this work we establish and investigate connections between causes for
query answers in databases, database repairs wrt. denial constraints, and
consistency-based diagnosis. The first two are relatively new research areas in
databases, and the third one is an established subject in knowledge
representation. We show how to obtain database repairs from causes, and the
other way around. Causality problems are formulated as diagnosis problems, and
the diagnoses provide causes and their responsibilities. The vast body of
research on database repairs can be applied to the newer problems of computing
actual causes for query answers and their responsibilities. These connections,
which are interesting per se, allow us, after a transition (inspired by
consistency-based diagnosis) to computational problems on hitting sets and
vertex covers in hypergraphs, to obtain several new algorithmic and complexity
results for database causality.

Comment: To appear in Theory of Computing Systems. By invitation to special
issue with extended papers from ICDT 2015 (paper arXiv:1412.4311)
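The hitting-set target of that transition can be illustrated in isolation (a brute-force toy over a small hypergraph; it shows only the hitting-set notion, not the paper's reduction from causality):

```python
from itertools import chain, combinations

def minimum_hitting_sets(edges):
    """All minimum-cardinality vertex sets intersecting every hyperedge.
    edges: list of sets of vertices."""
    vertices = sorted(set(chain.from_iterable(edges)))
    for k in range(len(vertices) + 1):
        hits = [set(c) for c in combinations(vertices, k)
                if all(set(c) & e for e in edges)]
        if hits:
            return hits

# A triangle hypergraph: no single vertex hits all three edges.
edges = [{"x", "y"}, {"y", "z"}, {"x", "z"}]
print(len(minimum_hitting_sets(edges)))  # 3 minimum hitting sets, each of size 2
```

In the causality setting, each hyperedge plays the role of a violated constraint (or a witness for a query answer), and a minimum hitting set corresponds to a most responsible cause or a smallest repair.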